I0322 12:55:44.012879 6 e2e.go:243] Starting e2e run "e0a4a0b4-537d-4a6a-87cf-96f70cc2f47e" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584881743 - Will randomize all specs
Will run 215 of 4412 specs

Mar 22 12:55:44.201: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 12:55:44.204: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 22 12:55:44.228: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 22 12:55:44.263: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 22 12:55:44.263: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 22 12:55:44.263: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 22 12:55:44.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 22 12:55:44.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 22 12:55:44.270: INFO: e2e test version: v1.15.10
Mar 22 12:55:44.271: INFO: kube-apiserver version: v1.15.7
S
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:55:44.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 22 12:55:44.324: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-66d8c984-a3bf-4eb5-9d7c-fe6b0475d6a3
STEP: Creating a pod to test consume configMaps
Mar 22 12:55:44.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c" in namespace "projected-7566" to be "success or failure"
Mar 22 12:55:44.388: INFO: Pod "pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.805754ms
Mar 22 12:55:46.392: INFO: Pod "pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011747972s
Mar 22 12:55:48.396: INFO: Pod "pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015859312s
STEP: Saw pod success
Mar 22 12:55:48.396: INFO: Pod "pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c" satisfied condition "success or failure"
Mar 22 12:55:48.400: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c container projected-configmap-volume-test:
STEP: delete the pod
Mar 22 12:55:48.435: INFO: Waiting for pod pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c to disappear
Mar 22 12:55:48.445: INFO: Pod pod-projected-configmaps-70fc802e-4302-4e89-883b-aca8dd15906c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:55:48.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7566" for this suite.
Mar 22 12:55:54.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:55:54.541: INFO: namespace projected-7566 deletion completed in 6.092897705s

• [SLOW TEST:10.270 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:55:54.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:55:58.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1303" for this suite.
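
Note: the "should not conflict" spec above creates a secret and a configMap and mounts both as volumes in one pod (its cleanup STEPs are visible in the log); both volume types are backed by wrapped emptyDir volumes internally, and the spec checks that the two wrappers do not collide. A minimal sketch of a pod with that shape, with every name invented for illustration rather than taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-and-configmap     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-volume-secret      # assumed to exist in the namespace
  - name: configmap-volume
    configMap:
      name: wrapped-volume-configmap         # assumed to exist in the namespace
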
Mar 22 12:56:04.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:04.804: INFO: namespace emptydir-wrapper-1303 deletion completed in 6.149042476s

• [SLOW TEST:10.263 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:04.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 22 12:56:04.860: INFO: Waiting up to 5m0s for pod "pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200" in namespace "emptydir-3136" to be "success or failure"
Mar 22 12:56:04.874: INFO: Pod "pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200": Phase="Pending", Reason="", readiness=false. Elapsed: 13.585645ms
Mar 22 12:56:06.895: INFO: Pod "pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034173353s
Mar 22 12:56:08.899: INFO: Pod "pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038908211s
STEP: Saw pod success
Mar 22 12:56:08.899: INFO: Pod "pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200" satisfied condition "success or failure"
Mar 22 12:56:08.902: INFO: Trying to get logs from node iruya-worker pod pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200 container test-container:
STEP: delete the pod
Mar 22 12:56:08.962: INFO: Waiting for pod pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200 to disappear
Mar 22 12:56:08.969: INFO: Pod pod-d69391c0-f664-4fd1-bdeb-5b99b8d4b200 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:56:08.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3136" for this suite.
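
Note: this and the next two EmptyDir specs in the run all follow one pattern: a pod writes a file into an emptyDir mount and verifies its ownership and permissions, varying the user (root vs. non-root), the mode under test (0644 vs. 0666), and the medium (node-default disk vs. tmpfs). A rough plain-YAML equivalent, with illustrative names and a shell check in place of the suite's test image; drop `medium: Memory` for the default-medium variant and the `securityContext` for the root variant:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-test     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: busybox:1.29
    # Write a file, force the mode under test, and print it back for verification.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs; omit for the node-default medium
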
Mar 22 12:56:14.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:15.054: INFO: namespace emptydir-3136 deletion completed in 6.082359412s

• [SLOW TEST:10.249 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:15.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 22 12:56:15.150: INFO: Waiting up to 5m0s for pod "pod-65152e5e-f035-42ee-83c7-ce9e81398d51" in namespace "emptydir-2543" to be "success or failure"
Mar 22 12:56:15.182: INFO: Pod "pod-65152e5e-f035-42ee-83c7-ce9e81398d51": Phase="Pending", Reason="", readiness=false. Elapsed: 31.505053ms
Mar 22 12:56:17.200: INFO: Pod "pod-65152e5e-f035-42ee-83c7-ce9e81398d51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049776933s
Mar 22 12:56:19.205: INFO: Pod "pod-65152e5e-f035-42ee-83c7-ce9e81398d51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054789138s
STEP: Saw pod success
Mar 22 12:56:19.205: INFO: Pod "pod-65152e5e-f035-42ee-83c7-ce9e81398d51" satisfied condition "success or failure"
Mar 22 12:56:19.208: INFO: Trying to get logs from node iruya-worker pod pod-65152e5e-f035-42ee-83c7-ce9e81398d51 container test-container:
STEP: delete the pod
Mar 22 12:56:19.228: INFO: Waiting for pod pod-65152e5e-f035-42ee-83c7-ce9e81398d51 to disappear
Mar 22 12:56:19.272: INFO: Pod pod-65152e5e-f035-42ee-83c7-ce9e81398d51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:56:19.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2543" for this suite.
Mar 22 12:56:25.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:25.398: INFO: namespace emptydir-2543 deletion completed in 6.122533717s

• [SLOW TEST:10.343 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:25.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 22 12:56:25.450: INFO: Waiting up to 5m0s for pod "pod-4a8094ba-05bb-4f5a-a466-836346518961" in namespace "emptydir-5149" to be "success or failure"
Mar 22 12:56:25.466: INFO: Pod "pod-4a8094ba-05bb-4f5a-a466-836346518961": Phase="Pending", Reason="", readiness=false. Elapsed: 16.349788ms
Mar 22 12:56:27.470: INFO: Pod "pod-4a8094ba-05bb-4f5a-a466-836346518961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019771815s
Mar 22 12:56:29.474: INFO: Pod "pod-4a8094ba-05bb-4f5a-a466-836346518961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023837305s
STEP: Saw pod success
Mar 22 12:56:29.474: INFO: Pod "pod-4a8094ba-05bb-4f5a-a466-836346518961" satisfied condition "success or failure"
Mar 22 12:56:29.476: INFO: Trying to get logs from node iruya-worker2 pod pod-4a8094ba-05bb-4f5a-a466-836346518961 container test-container:
STEP: delete the pod
Mar 22 12:56:29.516: INFO: Waiting for pod pod-4a8094ba-05bb-4f5a-a466-836346518961 to disappear
Mar 22 12:56:29.530: INFO: Pod pod-4a8094ba-05bb-4f5a-a466-836346518961 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:56:29.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5149" for this suite.
Mar 22 12:56:35.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:35.652: INFO: namespace emptydir-5149 deletion completed in 6.119175431s

• [SLOW TEST:10.254 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:35.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 22 12:56:35.739: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 22 12:56:40.744: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 22 12:56:40.744: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 22 12:56:40.770: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3525,SelfLink:/apis/apps/v1/namespaces/deployment-3525/deployments/test-cleanup-deployment,UID:44811c60-7d28-4cab-ad68-03a0c7ac9173,ResourceVersion:1232221,Generation:1,CreationTimestamp:2020-03-22 12:56:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Mar 22 12:56:40.776: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3525,SelfLink:/apis/apps/v1/namespaces/deployment-3525/replicasets/test-cleanup-deployment-55bbcbc84c,UID:b3548771-20fc-4814-8fa1-532764020185,ResourceVersion:1232223,Generation:1,CreationTimestamp:2020-03-22 12:56:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 44811c60-7d28-4cab-ad68-03a0c7ac9173 0xc00283ae27 0xc00283ae28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 22 12:56:40.776: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Mar 22 12:56:40.777: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3525,SelfLink:/apis/apps/v1/namespaces/deployment-3525/replicasets/test-cleanup-controller,UID:b6bd8f7d-51ab-4789-b559-30e9fb58daa6,ResourceVersion:1232222,Generation:1,CreationTimestamp:2020-03-22 12:56:35 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 44811c60-7d28-4cab-ad68-03a0c7ac9173 0xc00283ad3f 0xc00283ad50}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 22 12:56:40.849: INFO: Pod "test-cleanup-controller-rcknj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rcknj,GenerateName:test-cleanup-controller-,Namespace:deployment-3525,SelfLink:/api/v1/namespaces/deployment-3525/pods/test-cleanup-controller-rcknj,UID:b9975fae-2997-492f-82c6-e8def1aa982c,ResourceVersion:1232214,Generation:0,CreationTimestamp:2020-03-22 12:56:35 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller b6bd8f7d-51ab-4789-b559-30e9fb58daa6 0xc002cf5697 0xc002cf5698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nmvhn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nmvhn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nmvhn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cf5710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cf5730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 12:56:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 12:56:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 12:56:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 12:56:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.248,StartTime:2020-03-22 12:56:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 12:56:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f496c2c9a9d5f8ef29dea78b3f2ae01583e647a393bc69463feece59a5c3fa56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 22 12:56:40.849: INFO: Pod "test-cleanup-deployment-55bbcbc84c-pnpdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-pnpdb,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3525,SelfLink:/api/v1/namespaces/deployment-3525/pods/test-cleanup-deployment-55bbcbc84c-pnpdb,UID:c202a726-0f2e-49cf-a7b1-e9cebe4b5fcf,ResourceVersion:1232229,Generation:0,CreationTimestamp:2020-03-22 12:56:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c b3548771-20fc-4814-8fa1-532764020185 0xc002cf5817 0xc002cf5818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nmvhn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nmvhn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nmvhn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cf5890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cf58b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 12:56:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:56:40.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3525" for this suite.
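
Note: the object dump above already shows the knob this spec exercises: the Deployment is created with RevisionHistoryLimit:*0, so as soon as the new ReplicaSet takes over, the controller is expected to delete the old ReplicaSet rather than retain it for rollback. Reduced to a manifest (mirroring the spec printed above, with container details trimmed):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0      # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
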
Mar 22 12:56:46.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:46.987: INFO: namespace deployment-3525 deletion completed in 6.107907878s

• [SLOW TEST:11.335 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:46.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-7229/secret-test-1960260f-656b-42fa-804c-994a6389c193
STEP: Creating a pod to test consume secrets
Mar 22 12:56:47.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2" in namespace "secrets-7229" to be "success or failure"
Mar 22 12:56:47.078: INFO: Pod "pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.973953ms
Mar 22 12:56:49.083: INFO: Pod "pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016458116s
Mar 22 12:56:51.087: INFO: Pod "pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020389443s
STEP: Saw pod success
Mar 22 12:56:51.087: INFO: Pod "pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2" satisfied condition "success or failure"
Mar 22 12:56:51.090: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2 container env-test:
STEP: delete the pod
Mar 22 12:56:51.109: INFO: Waiting for pod pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2 to disappear
Mar 22 12:56:51.114: INFO: Pod pod-configmaps-d208b09f-b3d9-421e-b7bd-d17cc72dcce2 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:56:51.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7229" for this suite.
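
Note: the Secrets spec above injects a secret key into the container environment and checks it from the pod output. A minimal reproduction of that shape, with the secret name, key, and value invented for illustration (the data value is base64 of "value-1"):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # illustrative name
data:
  data-1: dmFsdWUtMQ==         # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
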
Mar 22 12:56:57.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:56:57.250: INFO: namespace secrets-7229 deletion completed in 6.132929865s

• [SLOW TEST:10.263 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:56:57.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Mar 22 12:56:57.301: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Mar 22 12:56:57.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:56:59.657: INFO: stderr: ""
Mar 22 12:56:59.657: INFO: stdout: "service/redis-slave created\n"
Mar 22 12:56:59.657: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Mar 22 12:56:59.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:56:59.925: INFO: stderr: ""
Mar 22 12:56:59.925: INFO: stdout: "service/redis-master created\n"
Mar 22 12:56:59.925: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 22 12:56:59.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:57:00.220: INFO: stderr: ""
Mar 22 12:57:00.220: INFO: stdout: "service/frontend created\n"
Mar 22 12:57:00.221: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Mar 22 12:57:00.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:57:00.471: INFO: stderr: ""
Mar 22 12:57:00.471: INFO: stdout: "deployment.apps/frontend created\n"
Mar 22 12:57:00.471: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 22 12:57:00.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:57:00.789: INFO: stderr: ""
Mar 22 12:57:00.789: INFO: stdout: "deployment.apps/redis-master created\n"
Mar 22 12:57:00.789: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Mar 22 12:57:00.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-904'
Mar 22 12:57:01.056: INFO: stderr: ""
Mar 22 12:57:01.056: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Mar 22 12:57:01.056: INFO: Waiting for all frontend pods to be Running.
Mar 22 12:57:11.106: INFO: Waiting for frontend to serve content.
Mar 22 12:57:11.126: INFO: Trying to add a new entry to the guestbook.
Mar 22 12:57:11.141: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 22 12:57:11.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.312: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.312: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 22 12:57:11.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.489: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.489: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 22 12:57:11.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.609: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 22 12:57:11.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.705: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.705: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 22 12:57:11.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.836: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.836: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 22 12:57:11.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-904'
Mar 22 12:57:11.944: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 22 12:57:11.944: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:57:11.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-904" for this suite.
Mar 22 12:57:50.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:57:50.097: INFO: namespace kubectl-904 deletion completed in 38.106746189s

• [SLOW TEST:52.847 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:57:50.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Mar 22 12:57:50.174: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 22 12:57:50.184: INFO: Waiting for terminating namespaces to be deleted...
Mar 22 12:57:50.187: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Mar 22 12:57:50.191: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.191: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 12:57:50.191: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.191: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 12:57:50.191: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Mar 22 12:57:50.196: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.196: INFO: Container coredns ready: true, restart count 0
Mar 22 12:57:50.196: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.196: INFO: Container coredns ready: true, restart count 0
Mar 22 12:57:50.197: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.197: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 12:57:50.197: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Mar 22 12:57:50.197: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-134efded-85af-4ff8-878c-06e7e9cc6cb6 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-134efded-85af-4ff8-878c-06e7e9cc6cb6 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-134efded-85af-4ff8-878c-06e7e9cc6cb6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:57:58.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7168" for this suite.
Mar 22 12:58:08.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:58:08.469: INFO: namespace sched-pred-7168 deletion completed in 10.123294937s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.372 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:58:08.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 22 12:58:08.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2527'
Mar 22 12:58:08.613: INFO: stderr: ""
Mar 22 12:58:08.613: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Mar 22 12:58:13.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2527 -o json'
Mar 22 12:58:13.753: INFO: stderr: ""
Mar 22 12:58:13.753: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-22T12:58:08Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2527\",\n \"resourceVersion\": \"1232715\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2527/pods/e2e-test-nginx-pod\",\n \"uid\": \"669db382-e49f-4178-b8ab-66562088aab1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lp5lk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lp5lk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lp5lk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-22T12:58:08Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-22T12:58:11Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-22T12:58:11Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-22T12:58:08Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e81d3c21e524a7d2071556c9c46550b0e322555b223a040662258d763e0a085d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-22T12:58:10Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.254\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-22T12:58:08Z\"\n }\n}\n"
STEP: replace the image in the pod
Mar 22 12:58:13.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2527'
Mar 22 12:58:14.023: INFO: stderr: ""
Mar 22 12:58:14.023: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Mar 22 12:58:14.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2527'
Mar 22 12:58:22.182: INFO: stderr: ""
Mar 22 12:58:22.182: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:58:22.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2527" for this suite.
Mar 22 12:58:28.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:58:28.304: INFO: namespace kubectl-2527 deletion completed in 6.118834203s

• [SLOW TEST:19.835 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:58:28.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9979d54f-1a69-406b-b559-637008251436
STEP: Creating a pod to test consume secrets
Mar 22 12:58:28.417: INFO: Waiting up to 5m0s for pod "pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf" in namespace "secrets-941" to be "success or failure"
Mar 22 12:58:28.421: INFO: Pod "pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397346ms
Mar 22 12:58:30.425: INFO: Pod "pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00792596s
Mar 22 12:58:32.429: INFO: Pod "pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012191265s
STEP: Saw pod success
Mar 22 12:58:32.429: INFO: Pod "pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf" satisfied condition "success or failure"
Mar 22 12:58:32.432: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf container secret-volume-test:
STEP: delete the pod
Mar 22 12:58:32.453: INFO: Waiting for pod pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf to disappear
Mar 22 12:58:32.458: INFO: Pod pod-secrets-24932ed8-88e6-43b9-8626-fb6a22b1b3cf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:58:32.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-941" for this suite.
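
Note: this spec is the volume-mounted counterpart of the environment-based Secrets spec earlier in the run; the secret is projected into the filesystem and the container reads the file back. A minimal sketch with invented names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # assumed to exist, as the spec creates one first
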
Mar 22 12:58:38.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 12:58:38.584: INFO: namespace secrets-941 deletion completed in 6.122510671s

• [SLOW TEST:10.279 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 12:58:38.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-c8d239ad-0278-4dcb-8910-8e91a1f0cae4 in namespace container-probe-6326
Mar 22 12:58:42.668: INFO: Started pod busybox-c8d239ad-0278-4dcb-8910-8e91a1f0cae4 in namespace container-probe-6326
STEP: checking the pod's current state and verifying that restartCount is present
Mar 22 12:58:42.670: INFO: Initial restart count of pod busybox-c8d239ad-0278-4dcb-8910-8e91a1f0cae4 is 0
Mar 22 12:59:32.828: INFO: Restart count of pod container-probe-6326/busybox-c8d239ad-0278-4dcb-8910-8e91a1f0cae4 is now 1 (50.157514612s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 12:59:32.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6326" for this suite.
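
Note: the probe spec above follows the classic pattern from the Kubernetes docs: the container creates /tmp/health, removes it after a delay, and the exec liveness probe running `cat /tmp/health` starts failing, so the kubelet restarts the container (the run shows restartCount going from 0 to 1 after ~50s). A sketch of that shape; the timings here are illustrative, not the suite's exact values:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # Healthy for 10s, then the probe file disappears and the probe fails.
    args: ["/bin/sh", "-c", "echo ok > /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1      # restart on the first failed probe
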
Mar 22 12:59:38.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 12:59:38.947: INFO: namespace container-probe-6326 deletion completed in 6.098387969s • [SLOW TEST:60.363 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 12:59:38.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 12:59:57.061: INFO: Container started at 2020-03-22 12:59:41 +0000 UTC, pod became ready at 2020-03-22 12:59:56 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 12:59:57.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8970" for this suite. 
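The 15-second gap between container start (12:59:41) and readiness (12:59:56) above is driven by the probe's initial delay. A rough equivalent, with illustrative names and timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "echo ok > /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      # The pod must not report Ready before this delay elapses.
      initialDelaySeconds: 15
EOF
kubectl get pod readiness-demo \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'

A failing readiness probe only removes the pod from service endpoints; unlike a liveness probe it never restarts the container, which is the "never restart" half of the assertion.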
Mar 22 13:00:19.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:00:19.161: INFO: namespace container-probe-8970 deletion completed in 22.095056877s • [SLOW TEST:40.214 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:00:19.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1733.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1733.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1733.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.83.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.83.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.83.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.83.217_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1733.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1733.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1733.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1733.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1733.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.83.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.83.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.83.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.83.217_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 13:00:25.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.373: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.376: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.398: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.402: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.405: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.408: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:25.429: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 13:00:30.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods 
dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.468: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.475: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:30.496: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 13:00:35.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.440: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.443: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.464: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the 
server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.470: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:35.492: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 13:00:40.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.443: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.465: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.468: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.472: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.475: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod 
dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:40.496: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 13:00:45.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.444: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.468: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:45.495: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 
13:00:50.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.438: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.442: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.468: INFO: Unable to read jessie_udp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.471: INFO: Unable to read jessie_tcp@dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.474: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local from pod dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b: the server could not find the requested resource (get pods dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b) Mar 22 13:00:50.496: INFO: Lookups using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b failed for: [wheezy_udp@dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@dns-test-service.dns-1733.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_udp@dns-test-service.dns-1733.svc.cluster.local jessie_tcp@dns-test-service.dns-1733.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1733.svc.cluster.local] Mar 22 13:00:55.493: INFO: DNS probes using dns-1733/dns-test-3a172321-bb2d-4f90-a6fb-74530b4a971b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:00:56.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1733" for this suite. 
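A note on the repeated "Unable to read ... (get pods ...)" entries above: the framework appears to poll result files inside the probe pod through the API server's pod proxy, so each entry is a 404 for a /results/<name> file that the in-pod dig loop has not written yet, not a DNS lookup failure reported directly. The roughly 5-second cadence of the retries is visible in the timestamps, and once every name has resolved at least once (13:00:55 here) the probe succeeds. To spot-check service DNS by hand, something like the following works; the service and namespace names are taken from this run, and busybox:1.28 is chosen because its nslookup is known to behave:

kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- \
  nslookup dns-test-service.dns-1733.svc.cluster.local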
Mar 22 13:01:02.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:01:02.201: INFO: namespace dns-1733 deletion completed in 6.147533777s • [SLOW TEST:43.040 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:01:02.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bbbafd39-52ae-41ca-9af0-01d1c57305d5 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bbbafd39-52ae-41ca-9af0-01d1c57305d5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:02:10.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2308" for this suite. 
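The roughly one-minute wait above before the update became visible is consistent with the kubelet refreshing configMap-backed projected volumes on its periodic sync rather than instantly. A sketch of the same create-mount-update cycle, with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # Re-read the projected file so an update to the ConfigMap
    # eventually shows up in the output.
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl create configmap demo-config --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -

Watching the pod's log should eventually flip from value-1 to value-2.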
Mar 22 13:02:32.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:02:32.712: INFO: namespace projected-2308 deletion completed in 22.092896274s • [SLOW TEST:90.510 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:02:32.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 22 13:02:32.799: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233421,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 22 13:02:32.799: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233421,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 22 13:02:42.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233441,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 22 13:02:42.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233441,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 22 13:02:52.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233463,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 22 13:02:52.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233463,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 22 13:03:02.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233483,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 22 13:03:02.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-a,UID:2d5ab06b-342f-4e2b-8f4a-528c26479802,ResourceVersion:1233483,Generation:0,CreationTimestamp:2020-03-22 13:02:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 22 13:03:12.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b,UID:79535a4a-f9e3-4cbf-97e6-92adbfe29406,ResourceVersion:1233503,Generation:0,CreationTimestamp:2020-03-22 13:03:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 22 13:03:12.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b,UID:79535a4a-f9e3-4cbf-97e6-92adbfe29406,ResourceVersion:1233503,Generation:0,CreationTimestamp:2020-03-22 13:03:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 22 13:03:22.840: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b,UID:79535a4a-f9e3-4cbf-97e6-92adbfe29406,ResourceVersion:1233524,Generation:0,CreationTimestamp:2020-03-22 13:03:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 22 13:03:22.840: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7168,SelfLink:/api/v1/namespaces/watch-7168/configmaps/e2e-watch-test-configmap-b,UID:79535a4a-f9e3-4cbf-97e6-92adbfe29406,ResourceVersion:1233524,Generation:0,CreationTimestamp:2020-03-22 13:03:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:03:32.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7168" for this suite. 
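Each "Got :" event above appears twice because two watchers match it: the label-specific watch and the A-or-B watch. The same add/modify/delete notification flow can be driven by hand, with illustrative names:

# Terminal 1: watch only configmaps carrying the label
kubectl get configmaps -l watch-this-configmap=demo-a --watch

# Terminal 2: generate ADDED, MODIFIED, and DELETED notifications
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: watch-demo-a
  labels:
    watch-this-configmap: demo-a
EOF
kubectl patch configmap watch-demo-a --type=merge -p '{"data":{"mutation":"1"}}'
kubectl delete configmap watch-demo-a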
Mar 22 13:03:38.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:03:38.946: INFO: namespace watch-7168 deletion completed in 6.100923838s • [SLOW TEST:66.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:03:38.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:03:43.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7059" for this suite. 
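The Kubelet test above boils down to "stdout of the container command ends up in kubectl logs". A hand-run equivalent, with illustrative names and a crude sleep standing in for the framework's polling:

kubectl run logs-demo --image=busybox --restart=Never -- \
  sh -c 'echo output written to stdout'
sleep 5   # crude wait for the container to run to completion
kubectl logs logs-demo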
Mar 22 13:04:21.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:04:21.130: INFO: namespace kubelet-test-7059 deletion completed in 38.103401963s • [SLOW TEST:42.183 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:04:21.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-267291f9-18cb-4ba4-80b2-ea9ea1c7b434 in namespace container-probe-3261 Mar 22 13:04:25.252: INFO: Started pod busybox-267291f9-18cb-4ba4-80b2-ea9ea1c7b434 in namespace container-probe-3261 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 13:04:25.255: INFO: Initial restart count of pod busybox-267291f9-18cb-4ba4-80b2-ea9ea1c7b434 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:08:25.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3261" for this suite. 
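This is the inverse of the earlier liveness case: the probe target exists for the pod's whole lifetime, and the test simply confirms restartCount stays at 0 over several minutes. A sketch of that check, with illustrative names and sampling intervals:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-stable-demo
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
EOF
# Sample the restart count a few times; it should stay 0 throughout.
for i in $(seq 1 8); do
  kubectl get pod liveness-stable-demo \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  sleep 30
done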
Mar 22 13:08:31.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:08:32.065: INFO: namespace container-probe-3261 deletion completed in 6.114489315s • [SLOW TEST:250.934 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:08:32.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 22 13:08:32.129: INFO: Waiting up to 5m0s for pod "var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83" in namespace "var-expansion-218" to be "success or failure" Mar 22 13:08:32.175: INFO: Pod "var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83": Phase="Pending", Reason="", readiness=false. Elapsed: 46.299552ms Mar 22 13:08:34.181: INFO: Pod "var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05246473s Mar 22 13:08:36.185: INFO: Pod "var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056521103s STEP: Saw pod success Mar 22 13:08:36.185: INFO: Pod "var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83" satisfied condition "success or failure" Mar 22 13:08:36.188: INFO: Trying to get logs from node iruya-worker pod var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83 container dapi-container: STEP: delete the pod Mar 22 13:08:36.206: INFO: Waiting for pod var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83 to disappear Mar 22 13:08:36.211: INFO: Pod var-expansion-b5342caf-f354-4f25-a4d0-a79bb204af83 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:08:36.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-218" for this suite. 
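The env composition being tested is the $(VAR) expansion syntax, which resolves against variables defined earlier in the same env list. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    # $(FOO) expands because FOO is defined earlier in this list.
    - name: COMPOSED
      value: "prefix;$(FOO);suffix"
EOF

kubectl logs var-expansion-demo should print the composed value once the pod succeeds.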
Mar 22 13:08:42.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:08:42.297: INFO: namespace var-expansion-218 deletion completed in 6.083973259s • [SLOW TEST:10.232 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:08:42.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 13:08:45.435: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:08:45.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-936" for this suite. 
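With FallbackToLogsOnError, container logs are used as the termination message only when the message file is empty and the container failed; here the container succeeds and writes the file, so the file contents win, which matches the "Expected: &{OK}" assertion above. A sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    # Logs would be used only on error with an empty file;
    # this run exercises the file-wins path.
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'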
Mar 22 13:08:51.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:08:51.556: INFO: namespace container-runtime-936 deletion completed in 6.08450811s • [SLOW TEST:9.259 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:08:51.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-lgcw STEP: Creating a pod to test atomic-volume-subpath Mar 22 13:08:51.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lgcw" in namespace "subpath-483" to be "success or failure" Mar 22 13:08:51.643: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.834484ms Mar 22 13:08:53.654: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024287873s Mar 22 13:08:55.658: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 4.02879266s Mar 22 13:08:57.663: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 6.033376724s Mar 22 13:08:59.667: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 8.037721445s Mar 22 13:09:01.672: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 10.042104206s Mar 22 13:09:03.676: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 12.046301729s Mar 22 13:09:05.680: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 14.050683195s Mar 22 13:09:07.684: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 16.054603513s Mar 22 13:09:09.688: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 18.058845996s Mar 22 13:09:11.693: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.063161809s Mar 22 13:09:13.697: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Running", Reason="", readiness=true. Elapsed: 22.067041337s Mar 22 13:09:15.702: INFO: Pod "pod-subpath-test-downwardapi-lgcw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072130367s STEP: Saw pod success Mar 22 13:09:15.702: INFO: Pod "pod-subpath-test-downwardapi-lgcw" satisfied condition "success or failure" Mar 22 13:09:15.705: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-lgcw container test-container-subpath-downwardapi-lgcw: STEP: delete the pod Mar 22 13:09:15.746: INFO: Waiting for pod pod-subpath-test-downwardapi-lgcw to disappear Mar 22 13:09:15.759: INFO: Pod pod-subpath-test-downwardapi-lgcw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lgcw Mar 22 13:09:15.759: INFO: Deleting pod "pod-subpath-test-downwardapi-lgcw" in namespace "subpath-483" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:09:15.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-483" for this suite. Mar 22 13:09:21.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:09:21.847: INFO: namespace subpath-483 deletion completed in 6.082499578s • [SLOW TEST:30.291 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:09:21.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-5d1c086c-4e7c-42bc-8fd8-8071b7e58247 STEP: Creating secret with name s-test-opt-upd-8d149988-5a51-465e-88ef-2cd256b66e0b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5d1c086c-4e7c-42bc-8fd8-8071b7e58247 STEP: Updating secret s-test-opt-upd-8d149988-5a51-465e-88ef-2cd256b66e0b STEP: Creating secret with name s-test-opt-create-231c64a6-086e-431c-a69a-04c54a41584c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:10:54.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4927" for this suite. 
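The "optional" in the secret test above is the volume-source flag that lets a pod start even when a referenced secret is missing; the long wait before the update was observed again points at the kubelet's periodic volume re-sync. A sketch of the create-after-mount case, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do ls /etc/maybe-secret; sleep 5; done"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/maybe-secret
  volumes:
  - name: maybe-secret
    secret:
      secretName: optional-demo-secret
      optional: true   # pod starts even though the secret does not exist yet
EOF
kubectl create secret generic optional-demo-secret --from-literal=data-1=value-1

The mounted directory starts out empty and eventually shows data-1 once the kubelet picks up the new secret.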
Mar 22 13:11:16.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:11:16.535: INFO: namespace secrets-4927 deletion completed in 22.090695304s • [SLOW TEST:114.687 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:11:16.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-7075f603-d865-4648-9b2b-691ffc16ae50 STEP: Creating a pod to test consume configMaps Mar 22 13:11:16.613: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736" in namespace "configmap-2955" to be "success or failure" Mar 22 13:11:16.656: INFO: Pod "pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736": Phase="Pending", Reason="", readiness=false. Elapsed: 42.172622ms Mar 22 13:11:18.660: INFO: Pod "pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046117757s Mar 22 13:11:20.663: INFO: Pod "pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049902076s STEP: Saw pod success Mar 22 13:11:20.663: INFO: Pod "pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736" satisfied condition "success or failure" Mar 22 13:11:20.667: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736 container configmap-volume-test: STEP: delete the pod Mar 22 13:11:20.686: INFO: Waiting for pod pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736 to disappear Mar 22 13:11:20.690: INFO: Pod pod-configmaps-2f09fbce-34f5-4062-b1f5-eed6d097f736 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:11:20.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2955" for this suite. 
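Mounting one ConfigMap through two separate volumes in the same pod, as this test does, looks roughly like the following; names and image are illustrative:

kubectl create configmap multi-volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Read the same key through both mount points.
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: multi-volume-demo
  - name: configmap-volume-2
    configMap:
      name: multi-volume-demo
EOF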
Mar 22 13:11:26.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:11:26.785: INFO: namespace configmap-2955 deletion completed in 6.090992963s • [SLOW TEST:10.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:11:26.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 22 13:11:26.856: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix908152977/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:11:26.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5192" for this suite. 
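The proxy check above can be reproduced directly; curl speaks HTTP over a Unix socket, and the host portion of the URL is effectively ignored. The socket path here is illustrative:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1   # stop the background proxy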
Mar 22 13:11:32.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:11:33.015: INFO: namespace kubectl-5192 deletion completed in 6.088155292s • [SLOW TEST:6.230 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:11:33.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 22 13:11:33.095: INFO: Waiting up to 5m0s for pod "pod-be00d44b-b2d2-477e-b949-360e67f32c52" in namespace "emptydir-2717" to be "success or failure" Mar 22 13:11:33.100: INFO: Pod "pod-be00d44b-b2d2-477e-b949-360e67f32c52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.251077ms Mar 22 13:11:35.105: INFO: Pod "pod-be00d44b-b2d2-477e-b949-360e67f32c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009725281s Mar 22 13:11:37.109: INFO: Pod "pod-be00d44b-b2d2-477e-b949-360e67f32c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014044959s STEP: Saw pod success Mar 22 13:11:37.109: INFO: Pod "pod-be00d44b-b2d2-477e-b949-360e67f32c52" satisfied condition "success or failure" Mar 22 13:11:37.112: INFO: Trying to get logs from node iruya-worker2 pod pod-be00d44b-b2d2-477e-b949-360e67f32c52 container test-container: STEP: delete the pod Mar 22 13:11:37.150: INFO: Waiting for pod pod-be00d44b-b2d2-477e-b949-360e67f32c52 to disappear Mar 22 13:11:37.167: INFO: Pod pod-be00d44b-b2d2-477e-b949-360e67f32c52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:11:37.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2717" for this suite. 
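The (non-root,0644,default) triple above means: write a file as a non-root user, with 0644 permissions, on an emptyDir backed by the node's default storage. A hand-rolled equivalent, with illustrative names (emptyDir directories are created world-writable, so an arbitrary UID can write):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-0644
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hi > /data/f && chmod 0644 /data/f && ls -ln /data/f"]
    volumeMounts:
    - { name: scratch, mountPath: /data }
  volumes:
  - name: scratch
    emptyDir: {}               # default medium: node disk
EOF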
Mar 22 13:11:43.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:11:43.266: INFO: namespace emptydir-2717 deletion completed in 6.096025939s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:11:43.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-73f61048-37d7-4394-951d-c009c548e3b9 STEP: Creating a pod to test consume configMaps Mar 22 13:11:43.330: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0" in namespace "projected-9101" to be "success or failure" Mar 22 13:11:43.335: INFO: Pod "pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404451ms Mar 22 13:11:45.340: INFO: Pod "pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009311943s Mar 22 13:11:47.343: INFO: Pod "pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012821171s STEP: Saw pod success Mar 22 13:11:47.343: INFO: Pod "pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0" satisfied condition "success or failure" Mar 22 13:11:47.346: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0 container projected-configmap-volume-test: STEP: delete the pod Mar 22 13:11:47.400: INFO: Waiting for pod pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0 to disappear Mar 22 13:11:47.408: INFO: Pod pod-projected-configmaps-0dfabc75-7eee-4774-923f-7cbbd70ebff0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:11:47.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9101" for this suite. 
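"With mappings" refers to the items list on the projection, which renames a ConfigMap key to an arbitrary file path inside the volume. Roughly (names illustrative):

kubectl create configmap demo-map-cm --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-cm
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/projected/renamed-file"]
    volumeMounts:
    - { name: proj, mountPath: /projected }
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-map-cm
          items:
          - key: data-1
            path: renamed-file
EOF

Only the mapped key appears in the volume, under its new name.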
Mar 22 13:11:53.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:11:53.545: INFO: namespace projected-9101 deletion completed in 6.133469174s • [SLOW TEST:10.278 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:11:53.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-cdcfc86d-a1c9-4eed-9802-428764eaab2a STEP: Creating a pod to test consume configMaps Mar 22 13:11:53.632: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641" in namespace "projected-870" to be "success or failure" Mar 22 13:11:53.676: INFO: Pod "pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641": Phase="Pending", Reason="", readiness=false. Elapsed: 44.320117ms Mar 22 13:11:55.680: INFO: Pod "pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048133105s Mar 22 13:11:57.684: INFO: Pod "pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051870528s STEP: Saw pod success Mar 22 13:11:57.684: INFO: Pod "pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641" satisfied condition "success or failure" Mar 22 13:11:57.687: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641 container projected-configmap-volume-test: STEP: delete the pod Mar 22 13:11:57.721: INFO: Waiting for pod pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641 to disappear Mar 22 13:11:57.739: INFO: Pod pod-projected-configmaps-ecf7af4b-267c-4698-9a3d-189cca229641 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:11:57.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-870" for this suite. 
Mar 22 13:12:03.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:03.839: INFO: namespace projected-870 deletion completed in 6.096699198s • [SLOW TEST:10.294 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:03.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-5631e509-e42a-437b-8835-0739fc292ab6 STEP: Creating a pod to test consume secrets Mar 22 13:12:03.896: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add" in namespace "projected-6065" to be "success or failure" Mar 22 13:12:03.909: INFO: Pod "pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add": Phase="Pending", Reason="", readiness=false. Elapsed: 13.335215ms Mar 22 13:12:05.914: INFO: Pod "pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017572046s Mar 22 13:12:07.918: INFO: Pod "pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022137059s STEP: Saw pod success Mar 22 13:12:07.918: INFO: Pod "pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add" satisfied condition "success or failure" Mar 22 13:12:07.921: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add container projected-secret-volume-test: STEP: delete the pod Mar 22 13:12:07.952: INFO: Waiting for pod pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add to disappear Mar 22 13:12:07.964: INFO: Pod pod-projected-secrets-fb8ea32d-90ce-42f5-8490-f099b7927add no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:07.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6065" for this suite. 
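The Secret counterpart works the same way: the items mapping picks one key out of the Secret and projects its decoded value to a chosen path. An illustrative sketch:

kubectl create secret generic demo-map-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/projected/new-path-data-1"]
    volumeMounts:
    - { name: proj, mountPath: /projected }
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: demo-map-secret
          items:
          - key: data-1
            path: new-path-data-1
EOF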
Mar 22 13:12:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:14.062: INFO: namespace projected-6065 deletion completed in 6.09498162s • [SLOW TEST:10.222 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:14.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:12:14.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f" in namespace "downward-api-8098" to be "success or failure" Mar 22 13:12:14.132: INFO: Pod "downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443572ms Mar 22 13:12:16.136: INFO: Pod "downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008119442s Mar 22 13:12:18.140: INFO: Pod "downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01132508s STEP: Saw pod success Mar 22 13:12:18.140: INFO: Pod "downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f" satisfied condition "success or failure" Mar 22 13:12:18.184: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f container client-container: STEP: delete the pod Mar 22 13:12:18.206: INFO: Waiting for pod downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f to disappear Mar 22 13:12:18.210: INFO: Pod downwardapi-volume-0667194d-dee1-46c1-9f4e-a3f9de69dd0f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:18.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8098" for this suite. 
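"Podname only" means the downward API volume projects just metadata.name and nothing else. A minimal equivalent (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-downward
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - { name: podinfo, mountPath: /etc/podinfo }
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

The container should print demo-downward, its own pod name, which is exactly what the assertion compares against.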
Mar 22 13:12:24.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:24.327: INFO: namespace downward-api-8098 deletion completed in 6.113599635s • [SLOW TEST:10.265 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:24.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 13:12:24.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4003' Mar 22 13:12:27.020: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 22 13:12:27.020: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 22 13:12:29.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4003' Mar 22 13:12:29.170: INFO: stderr: "" Mar 22 13:12:29.170: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4003" for this suite. 
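Note the stderr captured above: on this 1.15-era cluster a bare kubectl run still falls back to the deprecated deployment/apps.v1 generator. The replacements the warning points at are, roughly:

kubectl run my-nginx --generator=run-pod/v1 --image=nginx:1.14-alpine   # a single pod
kubectl create deployment my-nginx --image=nginx:1.14-alpine            # a managed deployment

In later releases kubectl run creates only pods, so scripts that relied on the old behaviour need the second form.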
Mar 22 13:12:35.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:35.272: INFO: namespace kubectl-4003 deletion completed in 6.097987333s • [SLOW TEST:10.945 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:35.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8679108a-e020-4957-89ae-5f0da7ab3108 STEP: Creating a pod to test consume secrets Mar 22 13:12:35.375: INFO: Waiting up to 5m0s for pod "pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366" in namespace "secrets-7196" to be "success or failure" Mar 22 13:12:35.417: INFO: Pod "pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366": Phase="Pending", Reason="", readiness=false. Elapsed: 42.218362ms Mar 22 13:12:37.421: INFO: Pod "pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046343281s Mar 22 13:12:39.425: INFO: Pod "pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050040993s STEP: Saw pod success Mar 22 13:12:39.425: INFO: Pod "pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366" satisfied condition "success or failure" Mar 22 13:12:39.428: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366 container secret-volume-test: STEP: delete the pod Mar 22 13:12:39.447: INFO: Waiting for pod pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366 to disappear Mar 22 13:12:39.451: INFO: Pod pod-secrets-17d7f0f2-85b6-4d9d-8e78-dace45818366 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:39.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7196" for this suite. 
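Here the suite combines three knobs at once: a non-root UID, a defaultMode on the secret volume, and an fsGroup so the mounted files are group-readable by that GID. Sketched by hand (values illustrative; the leading 0 makes the mode octal in YAML):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-secret-modes
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/data-1"]
    volumeMounts:
    - { name: sec, mountPath: /etc/secret }
  volumes:
  - name: sec
    secret:
      secretName: demo-secret
      defaultMode: 0440
EOF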
Mar 22 13:12:45.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:45.591: INFO: namespace secrets-7196 deletion completed in 6.136821659s • [SLOW TEST:10.318 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:45.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 22 13:12:45.671: INFO: Waiting up to 5m0s for pod "pod-a4b260c1-3907-44db-b691-2737b5ee26f8" in namespace "emptydir-8727" to be "success or failure" Mar 22 13:12:45.678: INFO: Pod "pod-a4b260c1-3907-44db-b691-2737b5ee26f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759348ms Mar 22 13:12:47.682: INFO: Pod "pod-a4b260c1-3907-44db-b691-2737b5ee26f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010880882s Mar 22 13:12:49.687: INFO: Pod "pod-a4b260c1-3907-44db-b691-2737b5ee26f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015353728s STEP: Saw pod success Mar 22 13:12:49.687: INFO: Pod "pod-a4b260c1-3907-44db-b691-2737b5ee26f8" satisfied condition "success or failure" Mar 22 13:12:49.690: INFO: Trying to get logs from node iruya-worker pod pod-a4b260c1-3907-44db-b691-2737b5ee26f8 container test-container: STEP: delete the pod Mar 22 13:12:50.252: INFO: Waiting for pod pod-a4b260c1-3907-44db-b691-2737b5ee26f8 to disappear Mar 22 13:12:50.270: INFO: Pod pod-a4b260c1-3907-44db-b691-2737b5ee26f8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:50.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8727" for this suite. 
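The tmpfs variant above differs from the default-medium emptyDir cases only in the medium field, which backs the volume with RAM instead of node disk (so contents count against the pod's memory and never touch disk). Swapping the volume stanza in the earlier emptyDir sketch for the following is enough to reproduce it:

  volumes:
  - name: scratch
    emptyDir:
      medium: Memory

Everything else — permissions, ownership, the write/read assertions — is exercised exactly as in the disk-backed cases.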
Mar 22 13:12:56.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:12:56.377: INFO: namespace emptydir-8727 deletion completed in 6.103098221s • [SLOW TEST:10.786 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:12:56.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:12:56.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2231" for this suite. Mar 22 13:13:02.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:13:02.523: INFO: namespace services-2231 deletion completed in 6.087378023s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.146 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:13:02.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 22 13:13:02.593: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
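Registering a sample API server, as this step begins to do, ultimately hinges on an APIService object that tells the aggregation layer which in-cluster Service fronts a given group/version; the test also stands up the Deployment, Service, certificates and RBAC behind it. A pared-down sketch, with an illustrative group name and TLS verification disabled for brevity:

kubectl apply -f - <<EOF
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api
    namespace: default
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF

Once the backing pods are ready, kube-apiserver proxies requests for that group/version to the named Service, which is the readiness the log below waits on.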
Mar 22 13:13:03.398: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 22 13:13:05.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720479583, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720479583, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720479583, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720479583, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 13:13:08.147: INFO: Waited 626.818421ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:13:08.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7346" for this suite. Mar 22 13:13:14.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:13:14.855: INFO: namespace aggregator-7346 deletion completed in 6.268445747s • [SLOW TEST:12.332 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:13:14.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 22 13:13:14.965: INFO: Waiting up to 5m0s for pod "downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98" in namespace "downward-api-7082" to be "success or failure" Mar 22 13:13:14.967: INFO: Pod "downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327921ms Mar 22 13:13:16.972: INFO: Pod "downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006579103s Mar 22 13:13:18.975: INFO: Pod "downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010197684s STEP: Saw pod success Mar 22 13:13:18.975: INFO: Pod "downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98" satisfied condition "success or failure" Mar 22 13:13:18.978: INFO: Trying to get logs from node iruya-worker pod downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98 container dapi-container: STEP: delete the pod Mar 22 13:13:19.013: INFO: Waiting for pod downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98 to disappear Mar 22 13:13:19.026: INFO: Pod downward-api-3a0b5b4e-5bd4-459d-a7bd-86c62207aa98 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:13:19.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7082" for this suite. Mar 22 13:13:25.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:13:25.134: INFO: namespace downward-api-7082 deletion completed in 6.105299023s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:13:25.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:13:25.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96" in namespace "projected-2914" to be "success or failure" Mar 22 13:13:25.346: INFO: Pod "downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96": Phase="Pending", Reason="", readiness=false. Elapsed: 156.331299ms Mar 22 13:13:27.350: INFO: Pod "downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160068898s Mar 22 13:13:29.355: INFO: Pod "downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164595996s STEP: Saw pod success Mar 22 13:13:29.355: INFO: Pod "downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96" satisfied condition "success or failure" Mar 22 13:13:29.358: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96 container client-container: STEP: delete the pod Mar 22 13:13:29.386: INFO: Waiting for pod downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96 to disappear Mar 22 13:13:29.404: INFO: Pod downwardapi-volume-36946096-fc87-40f0-9fbc-83e69d8c9c96 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:13:29.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2914" for this suite. Mar 22 13:13:35.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:13:35.532: INFO: namespace projected-2914 deletion completed in 6.109596157s • [SLOW TEST:10.398 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:13:35.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:13:39.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6128" for this suite. 
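The read-only busybox spec above asserts that, with readOnlyRootFilesystem set, the container cannot write anywhere outside its mounted volumes. By hand (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-ro-rootfs
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hi > /file || echo 'write refused, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

The write fails with a read-only-file-system error, which is the behaviour the spec locks in.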
Mar 22 13:14:25.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:14:25.781: INFO: namespace kubelet-test-6128 deletion completed in 46.103759164s • [SLOW TEST:50.248 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:14:25.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 22 13:14:25.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1983' Mar 22 13:14:26.775: INFO: stderr: "" Mar 22 13:14:26.775: INFO: stdout: "pod/pause created\n" Mar 22 13:14:26.775: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 22 13:14:26.775: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1983" to be "running and ready" Mar 22 13:14:26.814: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.763441ms Mar 22 13:14:28.844: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068834513s Mar 22 13:14:30.856: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.080698771s Mar 22 13:14:30.856: INFO: Pod "pause" satisfied condition "running and ready" Mar 22 13:14:30.856: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 22 13:14:30.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1983' Mar 22 13:14:30.946: INFO: stderr: "" Mar 22 13:14:30.946: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 22 13:14:30.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1983' Mar 22 13:14:31.043: INFO: stderr: "" Mar 22 13:14:31.043: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 22 13:14:31.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1983' Mar 22 13:14:31.155: INFO: stderr: "" Mar 22 13:14:31.155: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 22 13:14:31.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1983' Mar 22 13:14:31.251: INFO: stderr: "" Mar 22 13:14:31.252: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 22 13:14:31.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1983' Mar 22 13:14:31.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 22 13:14:31.385: INFO: stdout: "pod \"pause\" force deleted\n" Mar 22 13:14:31.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1983' Mar 22 13:14:31.483: INFO: stderr: "No resources found.\n" Mar 22 13:14:31.483: INFO: stdout: "" Mar 22 13:14:31.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1983 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 13:14:31.664: INFO: stderr: "" Mar 22 13:14:31.664: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:14:31.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1983" for this suite. 
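The label operations above are worth noting together with the one case the transcript doesn't need: changing an existing value requires --overwrite, while the trailing dash removes a key:

kubectl label pod pause testing-label=testing-label-value       # add
kubectl label pod pause testing-label=other-value --overwrite   # change
kubectl label pod pause testing-label-                          # remove

Without --overwrite, relabelling an existing key is rejected with an error rather than silently replaced.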
Mar 22 13:14:37.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:14:37.771: INFO: namespace kubectl-1983 deletion completed in 6.101611304s • [SLOW TEST:11.989 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:14:37.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:14:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2754" for this suite. 
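The QOS class is derived server-side from the pod's resource spec — requests equal to limits for every container yields Guaranteed — and surfaces in status, which is what this spec verifies. A quick check by hand (names and sizes illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-qos
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests: { cpu: 100m, memory: 100Mi }
      limits:   { cpu: 100m, memory: 100Mi }
EOF
kubectl get pod demo-qos -o jsonpath='{.status.qosClass}'   # prints: Guaranteed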
Mar 22 13:14:59.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:15:00.000: INFO: namespace pods-2754 deletion completed in 22.114762703s • [SLOW TEST:22.227 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:15:00.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 22 13:15:04.633: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-861 pod-service-account-2b7f8e4e-b2b5-43b5-babf-6cc8199474f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 22 13:15:04.848: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-861 pod-service-account-2b7f8e4e-b2b5-43b5-babf-6cc8199474f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 22 13:15:05.052: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-861 pod-service-account-2b7f8e4e-b2b5-43b5-babf-6cc8199474f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:15:05.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-861" for this suite. 
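Every pod gets its ServiceAccount's credentials projected at a fixed path unless automounting is disabled, and the three exec calls above read exactly those files. To poke at the same mount on any running pod (pod name illustrative):

kubectl exec some-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token

Setting automountServiceAccountToken: false on the pod spec (or on the ServiceAccount itself) suppresses the mount for workloads that should not talk to the API.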
Mar 22 13:15:11.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:15:11.356: INFO: namespace svcaccounts-861 deletion completed in 6.115035872s • [SLOW TEST:11.356 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:15:11.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-2564 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2564 STEP: Deleting pre-stop pod Mar 22 13:15:24.479: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:15:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2564" for this suite. 
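The preStop hook fires before the container receives SIGTERM, which is how the tester pod above manages to call the server one last time during deletion. The shape of such a hook, reduced to essentials (command illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-prestop
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo goodbye > /tmp/prestop"]
EOF

The hook must finish within the pod's terminationGracePeriodSeconds, or the kubelet kills the container anyway.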
Mar 22 13:16:02.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:16:02.604: INFO: namespace prestop-2564 deletion completed in 38.101724435s • [SLOW TEST:51.248 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:16:02.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 22 13:16:03.190: INFO: Pod name wrapped-volume-race-a831577c-933e-4645-9aa4-8331536534cf: Found 0 pods out of 5 Mar 22 13:16:08.198: INFO: Pod name wrapped-volume-race-a831577c-933e-4645-9aa4-8331536534cf: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a831577c-933e-4645-9aa4-8331536534cf in namespace emptydir-wrapper-8800, will wait for the garbage collector to delete the pods Mar 22 13:16:20.295: INFO: Deleting ReplicationController wrapped-volume-race-a831577c-933e-4645-9aa4-8331536534cf took: 21.537772ms Mar 22 13:16:20.596: INFO: Terminating ReplicationController wrapped-volume-race-a831577c-933e-4645-9aa4-8331536534cf pods took: 300.245686ms STEP: Creating RC which spawns configmap-volume pods Mar 22 13:17:02.269: INFO: Pod name wrapped-volume-race-6368b242-4273-412d-89d1-e5c8839c83ca: Found 0 pods out of 5 Mar 22 13:17:07.280: INFO: Pod name wrapped-volume-race-6368b242-4273-412d-89d1-e5c8839c83ca: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6368b242-4273-412d-89d1-e5c8839c83ca in namespace emptydir-wrapper-8800, will wait for the garbage collector to delete the pods Mar 22 13:17:21.376: INFO: Deleting ReplicationController wrapped-volume-race-6368b242-4273-412d-89d1-e5c8839c83ca took: 7.575248ms Mar 22 13:17:21.677: INFO: Terminating ReplicationController wrapped-volume-race-6368b242-4273-412d-89d1-e5c8839c83ca pods took: 300.297555ms STEP: Creating RC which spawns configmap-volume pods Mar 22 13:17:57.607: INFO: Pod name wrapped-volume-race-584a19fe-ede5-410e-9011-936c048e3223: Found 0 pods out of 5 Mar 22 13:18:02.614: INFO: Pod name wrapped-volume-race-584a19fe-ede5-410e-9011-936c048e3223: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-584a19fe-ede5-410e-9011-936c048e3223 in namespace emptydir-wrapper-8800, will wait for the garbage collector to delete the pods Mar 22 13:18:16.709: INFO: Deleting ReplicationController 
wrapped-volume-race-584a19fe-ede5-410e-9011-936c048e3223 took: 7.830763ms Mar 22 13:18:17.010: INFO: Terminating ReplicationController wrapped-volume-race-584a19fe-ede5-410e-9011-936c048e3223 pods took: 300.305828ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:19:02.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8800" for this suite. Mar 22 13:19:10.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:19:11.054: INFO: namespace emptydir-wrapper-8800 deletion completed in 8.098059035s • [SLOW TEST:188.449 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:19:11.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:19:11.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5" in namespace "downward-api-9011" to be "success or failure" Mar 22 13:19:11.126: INFO: Pod "downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150524ms Mar 22 13:19:13.130: INFO: Pod "downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010493098s Mar 22 13:19:15.134: INFO: Pod "downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014402757s STEP: Saw pod success Mar 22 13:19:15.134: INFO: Pod "downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5" satisfied condition "success or failure" Mar 22 13:19:15.137: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5 container client-container: STEP: delete the pod Mar 22 13:19:15.156: INFO: Waiting for pod downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5 to disappear Mar 22 13:19:15.161: INFO: Pod downwardapi-volume-7f8c9440-e78b-4145-b6c2-8f43db9270f5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:19:15.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9011" for this suite. Mar 22 13:19:21.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:19:21.251: INFO: namespace downward-api-9011 deletion completed in 6.087363237s • [SLOW TEST:10.197 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:19:21.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9ef0b6b8-7c57-4b53-a5a0-11338f957953 STEP: Creating configMap with name cm-test-opt-upd-bd5262c4-645b-4916-8b97-e99312dc2617 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9ef0b6b8-7c57-4b53-a5a0-11338f957953 STEP: Updating configmap cm-test-opt-upd-bd5262c4-645b-4916-8b97-e99312dc2617 STEP: Creating configMap with name cm-test-opt-create-46b6eeb9-fe07-41f8-92f9-2719961707e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:19:29.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6561" for this suite. 
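"Optional" here is the optional: true flag on the volume source: the pod starts even if the referenced ConfigMap is missing, files appear once it exists, and deletions and updates are reflected in place — the delete/update/create choreography the steps above walk through. The relevant stanza, usable in any of the ConfigMap pod sketches earlier (propagation happens on the kubelet's sync loop, typically within about a minute):

  volumes:
  - name: cm
    configMap:
      name: maybe-missing-cm
      optional: true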
Mar 22 13:19:47.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:19:47.519: INFO: namespace projected-6561 deletion completed in 18.095033213s • [SLOW TEST:26.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:19:47.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:19:53.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-517" for this suite. 
Mar 22 13:19:59.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:19:59.325: INFO: namespace watch-517 deletion completed in 6.198947239s • [SLOW TEST:11.806 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:19:59.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-7a5954bb-97be-421b-a0ac-33e05fd3c828 STEP: Creating a pod to test consume secrets Mar 22 13:19:59.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9" in namespace "projected-9879" to be "success or failure" Mar 22 13:19:59.420: INFO: Pod "pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406546ms Mar 22 13:20:01.424: INFO: Pod "pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007738911s Mar 22 13:20:03.442: INFO: Pod "pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025865541s STEP: Saw pod success Mar 22 13:20:03.442: INFO: Pod "pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9" satisfied condition "success or failure" Mar 22 13:20:03.445: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9 container projected-secret-volume-test: STEP: delete the pod Mar 22 13:20:03.465: INFO: Waiting for pod pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9 to disappear Mar 22 13:20:03.470: INFO: Pod pod-projected-secrets-6ba8d408-6441-4256-9ce7-cc4cb206e8b9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:20:03.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9879" for this suite. 
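[Annotation: the pod spec exercised here combines a projected secret volume with a pod-level securityContext. A rough stand-in (UID, GID, mode and names are illustrative values, not the suite's):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo        # illustrative name
  spec:
    securityContext:
      runAsUser: 1000                  # non-root
      fsGroup: 2000                    # group ownership applied to volume files
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/secret && cat /etc/secret/*"]
      volumeMounts:
      - name: sec
        mountPath: /etc/secret
    volumes:
    - name: sec
      projected:
        defaultMode: 0440              # YAML octal (288 decimal); files land as -r--r-----
        sources:
        - secret:
            name: demo-secret          # illustrative; the suite generates its own

The test then asserts the projected files carry the requested mode and the fsGroup ownership despite the container running as a non-root user.]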
Mar 22 13:20:09.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:20:09.576: INFO: namespace projected-9879 deletion completed in 6.103987358s • [SLOW TEST:10.250 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:20:09.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:20:09.639: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.688518ms)
Mar 22 13:20:09.642: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.556911ms)
Mar 22 13:20:09.646: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.835777ms)
Mar 22 13:20:09.670: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 24.342149ms)
Mar 22 13:20:09.674: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.94955ms)
Mar 22 13:20:09.678: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.078422ms)
Mar 22 13:20:09.682: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.257422ms)
Mar 22 13:20:09.685: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.195727ms)
Mar 22 13:20:09.688: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.978294ms)
Mar 22 13:20:09.691: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.357337ms)
Mar 22 13:20:09.695: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.641029ms)
Mar 22 13:20:09.699: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.718312ms)
Mar 22 13:20:09.702: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.420404ms)
Mar 22 13:20:09.706: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.616977ms)
Mar 22 13:20:09.710: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.809043ms)
Mar 22 13:20:09.713: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.10926ms)
Mar 22 13:20:09.716: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.106095ms)
Mar 22 13:20:09.719: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.985087ms)
Mar 22 13:20:09.722: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.887732ms)
Mar 22 13:20:09.726: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.570225ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:20:09.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5573" for this suite. Mar 22 13:20:15.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:20:15.823: INFO: namespace proxy-5573 deletion completed in 6.094247275s • [SLOW TEST:6.246 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:20:15.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 22 13:20:20.414: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1bec6fb1-4e05-4cfb-b72b-f7dc758c42ab" Mar 22 13:20:20.414: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1bec6fb1-4e05-4cfb-b72b-f7dc758c42ab" in namespace "pods-6304" to be "terminated due to deadline exceeded" Mar 22 13:20:20.423: INFO: Pod "pod-update-activedeadlineseconds-1bec6fb1-4e05-4cfb-b72b-f7dc758c42ab": Phase="Running", Reason="", readiness=true. Elapsed: 8.364315ms Mar 22 13:20:22.427: INFO: Pod "pod-update-activedeadlineseconds-1bec6fb1-4e05-4cfb-b72b-f7dc758c42ab": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01275019s Mar 22 13:20:22.427: INFO: Pod "pod-update-activedeadlineseconds-1bec6fb1-4e05-4cfb-b72b-f7dc758c42ab" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:20:22.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6304" for this suite. 
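[Annotation on the update above: activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod (it can be set or decreased, never increased), and shrinking it forces the kubelet to fail the pod with reason DeadlineExceeded, which is exactly the Running-to-Failed transition logged. A minimal stand-in pod, with illustrative names and values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: active-deadline-demo         # illustrative name
  spec:
    activeDeadlineSeconds: 600         # generous initial deadline
    restartPolicy: Never
    containers:
    - name: sleeper
      image: busybox
      command: ["sleep", "3600"]

Patching the deadline down, e.g. kubectl patch pod active-deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}', reproduces the Phase="Failed", Reason="DeadlineExceeded" outcome seen above.]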
Mar 22 13:20:28.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:20:28.528: INFO: namespace pods-6304 deletion completed in 6.096857841s • [SLOW TEST:12.705 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:20:28.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 22 13:20:29.143: INFO: created pod pod-service-account-defaultsa Mar 22 13:20:29.144: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 22 13:20:29.166: INFO: created pod pod-service-account-mountsa Mar 22 13:20:29.166: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 22 13:20:29.181: INFO: created pod pod-service-account-nomountsa Mar 22 13:20:29.181: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 22 13:20:29.263: INFO: created pod pod-service-account-defaultsa-mountspec Mar 22 13:20:29.263: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 22 13:20:29.277: INFO: created pod pod-service-account-mountsa-mountspec Mar 22 13:20:29.277: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 22 13:20:29.299: INFO: created pod pod-service-account-nomountsa-mountspec Mar 22 13:20:29.299: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 22 13:20:29.312: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 22 13:20:29.312: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 22 13:20:29.354: INFO: created pod pod-service-account-mountsa-nomountspec Mar 22 13:20:29.354: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 22 13:20:29.395: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 22 13:20:29.395: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:20:29.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6703" for this suite. 
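[Annotation: the nine pods above sweep the automount matrix. automountServiceAccountToken can be set on the ServiceAccount, on the pod spec, or on both, and the pod-level field wins when both are set, which is why pod-service-account-nomountsa-mountspec reports a token mount of true while pod-service-account-defaultsa-nomountspec reports false. A sketch of the overriding case, with illustrative names:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa                    # illustrative name
  automountServiceAccountToken: false   # account-level opt-out
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: nomount-sa-mountspec          # illustrative name
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: true  # pod-level setting overrides the account
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
]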
Mar 22 13:20:55.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:20:55.563: INFO: namespace svcaccounts-6703 deletion completed in 26.133741895s • [SLOW TEST:27.034 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:20:55.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 22 13:20:55.613: INFO: namespace kubectl-5146 Mar 22 13:20:55.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5146' Mar 22 13:20:55.869: INFO: stderr: "" Mar 22 13:20:55.869: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 22 13:20:56.873: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:20:56.873: INFO: Found 0 / 1 Mar 22 13:20:57.940: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:20:57.940: INFO: Found 0 / 1 Mar 22 13:20:58.874: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:20:58.874: INFO: Found 0 / 1 Mar 22 13:20:59.874: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:20:59.874: INFO: Found 1 / 1 Mar 22 13:20:59.874: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 22 13:20:59.878: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:20:59.878: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 22 13:20:59.878: INFO: wait on redis-master startup in kubectl-5146 Mar 22 13:20:59.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mv2gm redis-master --namespace=kubectl-5146' Mar 22 13:20:59.985: INFO: stderr: "" Mar 22 13:20:59.985: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Mar 13:20:58.412 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Mar 13:20:58.412 # Server started, Redis version 3.2.12\n1:M 22 Mar 13:20:58.413 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Mar 13:20:58.413 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 22 13:20:59.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5146' Mar 22 13:21:00.125: INFO: stderr: "" Mar 22 13:21:00.125: INFO: stdout: "service/rm2 exposed\n" Mar 22 13:21:00.130: INFO: Service rm2 in namespace kubectl-5146 found. STEP: exposing service Mar 22 13:21:02.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5146' Mar 22 13:21:02.263: INFO: stderr: "" Mar 22 13:21:02.263: INFO: stdout: "service/rm3 exposed\n" Mar 22 13:21:02.271: INFO: Service rm3 in namespace kubectl-5146 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:21:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5146" for this suite. 
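[Annotation: each kubectl expose above is shorthand for creating a Service whose selector is copied from the exposed object. The rm2 invocation is roughly equivalent to applying the manifest below (the selector is an assumption based on the app=redis label the test matches pods on):

  apiVersion: v1
  kind: Service
  metadata:
    name: rm2
  spec:
    selector:
      app: redis          # assumed from the RC's pod labels
    ports:
    - port: 1234          # from --port
      targetPort: 6379    # from --target-port

rm3 then repeats the trick against the rm2 service itself, re-exposing the same backend pods on port 2345.]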
Mar 22 13:21:26.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:21:26.368: INFO: namespace kubectl-5146 deletion completed in 22.085628914s • [SLOW TEST:30.805 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:21:26.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-5rqr STEP: Creating a pod to test atomic-volume-subpath Mar 22 13:21:26.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5rqr" in namespace "subpath-2460" to be "success or failure" Mar 22 13:21:26.634: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56623ms Mar 22 13:21:28.647: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015142179s Mar 22 13:21:30.650: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 4.018596476s Mar 22 13:21:32.654: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 6.022355696s Mar 22 13:21:34.670: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 8.039001594s Mar 22 13:21:36.675: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 10.043228395s Mar 22 13:21:38.679: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 12.047388973s Mar 22 13:21:40.683: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 14.05123329s Mar 22 13:21:42.686: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 16.054709159s Mar 22 13:21:44.690: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 18.058362193s Mar 22 13:21:46.694: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 20.062279334s Mar 22 13:21:48.698: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Running", Reason="", readiness=true. Elapsed: 22.066377112s Mar 22 13:21:50.702: INFO: Pod "pod-subpath-test-configmap-5rqr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.070558369s STEP: Saw pod success Mar 22 13:21:50.702: INFO: Pod "pod-subpath-test-configmap-5rqr" satisfied condition "success or failure" Mar 22 13:21:50.725: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-5rqr container test-container-subpath-configmap-5rqr: STEP: delete the pod Mar 22 13:21:50.751: INFO: Waiting for pod pod-subpath-test-configmap-5rqr to disappear Mar 22 13:21:50.766: INFO: Pod pod-subpath-test-configmap-5rqr no longer exists STEP: Deleting pod pod-subpath-test-configmap-5rqr Mar 22 13:21:50.767: INFO: Deleting pod "pod-subpath-test-configmap-5rqr" in namespace "subpath-2460" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:21:50.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2460" for this suite. Mar 22 13:21:56.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:21:56.874: INFO: namespace subpath-2460 deletion completed in 6.102760661s • [SLOW TEST:30.505 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:21:56.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 22 13:21:56.979: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237633,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 22 13:21:56.979: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237634,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 22 13:21:56.979: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237635,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 22 13:22:07.023: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237656,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 22 13:22:07.023: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237657,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 22 13:22:07.023: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9714,SelfLink:/api/v1/namespaces/watch-9714/configmaps/e2e-watch-test-label-changed,UID:a4f5da1b-f4a2-45fe-a309-878cb3e75963,ResourceVersion:1237658,Generation:0,CreationTimestamp:2020-03-22 13:21:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:22:07.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9714" for this suite. Mar 22 13:22:13.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:22:13.118: INFO: namespace watch-9714 deletion completed in 6.08449414s • [SLOW TEST:16.243 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:22:13.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:22:17.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9382" for this suite. 
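[Annotation: the pod behind this spec runs a command that exits non-zero on every start, so its container status carries a terminated state whose Reason the test asserts. A minimal stand-in (restartPolicy here is an assumption for illustration; the suite configures its own):

  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo               # illustrative name
  spec:
    restartPolicy: Never               # keep the terminated state visible
    containers:
    - name: failer
      image: busybox
      command: ["/bin/false"]          # exits 1 immediately

kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' then prints Error, the terminated reason the test checks for.]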
Mar 22 13:22:23.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:22:23.291: INFO: namespace kubelet-test-9382 deletion completed in 6.095723033s • [SLOW TEST:10.173 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:22:23.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:22:23.445: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b2dc50b0-9849-490d-bf5e-bf8f2fd17441", Controller:(*bool)(0xc002bbfe6a), BlockOwnerDeletion:(*bool)(0xc002bbfe6b)}} Mar 22 13:22:23.504: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3306d872-b70a-46a2-8aac-b8c92660647a", Controller:(*bool)(0xc0027908c2), BlockOwnerDeletion:(*bool)(0xc0027908c3)}} Mar 22 13:22:23.527: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ca581b09-7bf5-47ed-ac81-3ecab4edb837", Controller:(*bool)(0xc00278601a), BlockOwnerDeletion:(*bool)(0xc00278601b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:22:28.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6667" for this suite. 
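[Annotation: the OwnerReferences dumped above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector must still collect all three rather than deadlock. Owner references carry the owner's UID, so a manifest can only sketch the shape; below is pod1's vertex of the cycle, reusing the pod3 UID from the log line above (in practice the UID must be read back after creating pod3):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod3
      uid: b2dc50b0-9849-490d-bf5e-bf8f2fd17441   # pod3's UID, from the log above
      controller: true
      blockOwnerDeletion: true
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
]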
Mar 22 13:22:34.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:22:34.658: INFO: namespace gc-6667 deletion completed in 6.102658936s • [SLOW TEST:11.367 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:22:34.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:22:34.740: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d" in namespace "downward-api-8924" to be "success or failure" Mar 22 13:22:34.744: INFO: Pod "downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.337812ms Mar 22 13:22:36.752: INFO: Pod "downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011514428s Mar 22 13:22:38.757: INFO: Pod "downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016103981s STEP: Saw pod success Mar 22 13:22:38.757: INFO: Pod "downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d" satisfied condition "success or failure" Mar 22 13:22:38.760: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d container client-container: STEP: delete the pod Mar 22 13:22:38.789: INFO: Waiting for pod downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d to disappear Mar 22 13:22:38.800: INFO: Pod downwardapi-volume-97c5ddb7-3cf3-4ec9-9985-75323d906b6d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:22:38.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8924" for this suite. 
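[Annotation on what "set mode on item file" means concretely: a downwardAPI volume item may carry its own mode, overriding the volume-wide defaultMode for just that file. A rough stand-in pod (path, field and mode are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo           # illustrative name
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400                   # YAML octal; only this file becomes -r--------
]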
Mar 22 13:22:44.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:22:44.920: INFO: namespace downward-api-8924 deletion completed in 6.115998771s • [SLOW TEST:10.261 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:22:44.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7870/configmap-test-7674c5ee-4890-4e01-809b-cfa4dd1d404d STEP: Creating a pod to test consume configMaps Mar 22 13:22:45.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229" in namespace "configmap-7870" to be "success or failure" Mar 22 13:22:45.010: INFO: Pod "pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85864ms Mar 22 13:22:47.014: INFO: Pod "pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008440232s Mar 22 13:22:49.019: INFO: Pod "pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012697904s STEP: Saw pod success Mar 22 13:22:49.019: INFO: Pod "pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229" satisfied condition "success or failure" Mar 22 13:22:49.022: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229 container env-test: STEP: delete the pod Mar 22 13:22:49.047: INFO: Waiting for pod pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229 to disappear Mar 22 13:22:49.051: INFO: Pod pod-configmaps-627e98ec-d698-43e6-8e69-49fed2c67229 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:22:49.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7870" for this suite. 
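[Annotation: the env-test container above reads a ConfigMap key through an environment variable rather than a volume. The wiring looks like this, with illustrative names and key in place of the generated ones:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: env-demo-config              # illustrative name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-demo                     # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo-config
            key: data-1
]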
Mar 22 13:22:55.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:22:55.147: INFO: namespace configmap-7870 deletion completed in 6.092272693s • [SLOW TEST:10.227 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:22:55.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0322 13:23:05.238083 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 22 13:23:05.238: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:23:05.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3303" for this suite. 
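[Annotation: "not orphaning" here means the RC is deleted with cascading (non-orphan) propagation, so the garbage collector deletes the pods it owned instead of stripping their ownerReferences; "wait for all pods to be garbage collected" then simply polls until no pods remain. A stand-in RC, all names illustrative:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: gc-demo-rc                   # illustrative name
  spec:
    replicas: 2
    selector:
      app: gc-demo
    template:
      metadata:
        labels:
          app: gc-demo
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1

Deleting it without --cascade=false (cascading deletion is kubectl's default) exercises exactly this GC path.]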
Mar 22 13:23:11.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:23:11.369: INFO: namespace gc-3303 deletion completed in 6.098100952s • [SLOW TEST:16.222 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:23:11.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:23:11.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535" in namespace "projected-7397" to be "success or failure" Mar 22 13:23:11.444: INFO: Pod "downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535": Phase="Pending", Reason="", readiness=false. Elapsed: 15.310164ms Mar 22 13:23:13.448: INFO: Pod "downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019367813s Mar 22 13:23:15.452: INFO: Pod "downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023026407s STEP: Saw pod success Mar 22 13:23:15.452: INFO: Pod "downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535" satisfied condition "success or failure" Mar 22 13:23:15.455: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535 container client-container: STEP: delete the pod Mar 22 13:23:15.487: INFO: Waiting for pod downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535 to disappear Mar 22 13:23:15.508: INFO: Pod downwardapi-volume-45651e34-33d7-4c6b-95c3-64e866419535 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:23:15.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7397" for this suite. 
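[Annotation: the file consumed here is produced by a downwardAPI projection of the container's own CPU limit. Roughly, with illustrative limit, paths and names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-limit-demo               # illustrative name
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m                    # the value exposed through the volume
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m            # report the limit in millicores (file contains 500)
]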
Mar 22 13:23:21.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:23:21.625: INFO: namespace projected-7397 deletion completed in 6.095942357s • [SLOW TEST:10.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:23:21.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-7d920a37-00aa-4d6d-ae74-456b8254e949 STEP: Creating a pod to test consume configMaps Mar 22 13:23:21.689: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9" in namespace "projected-7203" to be "success or failure" Mar 22 13:23:21.699: INFO: Pod "pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.282957ms Mar 22 13:23:23.704: INFO: Pod "pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015045486s Mar 22 13:23:25.708: INFO: Pod "pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019364637s STEP: Saw pod success Mar 22 13:23:25.708: INFO: Pod "pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9" satisfied condition "success or failure" Mar 22 13:23:25.711: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9 container projected-configmap-volume-test: STEP: delete the pod Mar 22 13:23:25.755: INFO: Waiting for pod pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9 to disappear Mar 22 13:23:25.813: INFO: Pod pod-projected-configmaps-f903ee48-461b-432b-bf81-9605bc71cea9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:23:25.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7203" for this suite. 
Mar 22 13:23:31.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:23:32.008: INFO: namespace projected-7203 deletion completed in 6.191802143s • [SLOW TEST:10.383 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:23:32.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-ac972957-324a-4ee6-8d83-b6f9e5d2297c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:23:32.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-963" for this suite. Mar 22 13:23:38.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:23:38.157: INFO: namespace configmap-963 deletion completed in 6.091901956s • [SLOW TEST:6.149 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:23:38.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 22 13:23:38.221: INFO: Waiting up to 5m0s for pod "downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa" in namespace "downward-api-5636" to be "success or failure" Mar 22 13:23:38.224: INFO: Pod "downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793604ms Mar 22 13:23:40.228: INFO: Pod "downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007207494s Mar 22 13:23:42.233: INFO: Pod "downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011808644s STEP: Saw pod success Mar 22 13:23:42.233: INFO: Pod "downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa" satisfied condition "success or failure" Mar 22 13:23:42.236: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa container dapi-container: STEP: delete the pod Mar 22 13:23:42.274: INFO: Waiting for pod downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa to disappear Mar 22 13:23:42.281: INFO: Pod downward-api-c292a9a6-47c0-43f3-b49b-98ee79a79aaa no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:23:42.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5636" for this suite. Mar 22 13:23:48.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:23:48.385: INFO: namespace downward-api-5636 deletion completed in 6.099740878s • [SLOW TEST:10.228 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:23:48.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 22 13:23:48.488: INFO: Waiting up to 5m0s for pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7" in namespace "var-expansion-2495" to be "success or failure" Mar 22 13:23:48.494: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.792499ms Mar 22 13:23:50.499: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010840183s Mar 22 13:23:52.503: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Succeeded", Reason="", readiness=false. 
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:23:48.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Mar 22 13:23:48.488: INFO: Waiting up to 5m0s for pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7" in namespace "var-expansion-2495" to be "success or failure"
Mar 22 13:23:48.494: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.792499ms
Mar 22 13:23:50.499: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010840183s
Mar 22 13:23:52.503: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014754128s
STEP: Saw pod success
Mar 22 13:23:52.503: INFO: Pod "var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7" satisfied condition "success or failure"
Mar 22 13:23:52.506: INFO: Trying to get logs from node iruya-worker pod var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7 container dapi-container:
STEP: delete the pod
Mar 22 13:23:52.548: INFO: Waiting for pod var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7 to disappear
Mar 22 13:23:52.567: INFO: Pod var-expansion-f6fbc7f5-2340-402f-a498-e5e2801954f7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:23:52.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2495" for this suite.
Mar 22 13:23:58.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:23:58.664: INFO: namespace var-expansion-2495 deletion completed in 6.092437594s
• [SLOW TEST:10.278 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
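The substitution here is done by the kubelet, not by a shell: $(VAR) references in command and args are expanded from the container's declared env before the process starts. A sketch of the kind of pod this spec creates (values illustrative); the args spec that follows exercises the same expansion through the Args field:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// $(TEST_VAR) is replaced by the kubelet with the env value;
				// no shell is involved in the expansion.
				Command: []string{"echo", "command is: $(TEST_VAR)"},
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
```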
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:23:58.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Mar 22 13:23:58.735: INFO: Waiting up to 5m0s for pod "var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73" in namespace "var-expansion-3402" to be "success or failure"
Mar 22 13:23:58.765: INFO: Pod "var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73": Phase="Pending", Reason="", readiness=false. Elapsed: 29.808001ms
Mar 22 13:24:00.768: INFO: Pod "var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033471259s
Mar 22 13:24:02.773: INFO: Pod "var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037833992s
STEP: Saw pod success
Mar 22 13:24:02.773: INFO: Pod "var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73" satisfied condition "success or failure"
Mar 22 13:24:02.776: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73 container dapi-container:
STEP: delete the pod
Mar 22 13:24:02.791: INFO: Waiting for pod var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73 to disappear
Mar 22 13:24:02.807: INFO: Pod var-expansion-0c9122eb-bc30-44f4-b905-c5baf64a9d73 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:24:02.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3402" for this suite.
Mar 22 13:24:08.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:24:08.932: INFO: namespace var-expansion-3402 deletion completed in 6.121743729s
• [SLOW TEST:10.269 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:24:08.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-3717
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3717 to expose endpoints map[]
Mar 22 13:24:09.044: INFO: Get endpoints failed (32.386001ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 22 13:24:10.048: INFO: successfully validated that service multi-endpoint-test in namespace services-3717 exposes endpoints map[] (1.036676837s elapsed)
STEP: Creating pod pod1 in namespace services-3717
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3717 to expose endpoints map[pod1:[100]]
Mar 22 13:24:13.170: INFO: successfully validated that service multi-endpoint-test in namespace services-3717 exposes endpoints map[pod1:[100]] (3.114287573s elapsed)
STEP: Creating pod pod2 in namespace services-3717
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3717 to expose endpoints map[pod1:[100] pod2:[101]]
Mar 22 13:24:17.266: INFO: successfully validated that service multi-endpoint-test in namespace services-3717 exposes endpoints map[pod1:[100] pod2:[101]] (4.091832447s elapsed)
STEP: Deleting pod pod1 in namespace services-3717
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3717 to expose endpoints map[pod2:[101]]
Mar 22 13:24:18.306: INFO: successfully validated that service multi-endpoint-test in namespace services-3717 exposes endpoints map[pod2:[101]] (1.034295643s elapsed)
STEP: Deleting pod pod2 in namespace services-3717
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3717 to expose endpoints map[]
Mar 22 13:24:18.375: INFO: successfully validated that service multi-endpoint-test in namespace services-3717 exposes endpoints map[] (64.199368ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:24:18.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3717" for this suite.
Mar 22 13:24:24.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:24:24.491: INFO: namespace services-3717 deletion completed in 6.085722472s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:15.558 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
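The endpoints maps above (map[pod1:[100] pod2:[101]] and so on) come from a two-port Service: one port targets container port 100 (served only by pod1), the other targets 101 (pod2), so each pod appears only under the target port it actually serves. A sketch of such a Service; the selector and port names are illustrative:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				// Each port gets its own endpoints list, keyed by target port.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(svc)
}
```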
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:24:24.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 22 13:24:27.646: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:24:27.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6163" for this suite.
Mar 22 13:24:33.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:24:33.869: INFO: namespace container-runtime-6163 deletion completed in 6.092813383s
• [SLOW TEST:9.377 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
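The "Expected: &{DONE}" assertion above checks status.containerStatuses[].state.terminated.message, which the kubelet reads from a non-default file path while the container runs as a non-root user. A sketch of such a container; the uid, path, and image are illustrative:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// The kubelet reads this file after the container exits and
				// surfaces its contents as state.terminated.message ("DONE").
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
```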
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:24:33.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 22 13:24:41.980: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:42.002: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:44.002: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:44.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:46.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:46.008: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:48.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:48.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:50.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:50.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:52.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:52.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:54.002: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:54.006: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:56.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:56.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:24:58.002: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:24:58.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:25:00.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:25:00.007: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:25:02.002: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:25:02.005: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 22 13:25:04.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 22 13:25:04.006: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:25:04.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8364" for this suite.
Mar 22 13:25:26.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:25:26.120: INFO: namespace container-lifecycle-hook-8364 deletion completed in 22.109850651s
• [SLOW TEST:52.251 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
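The hook under test hangs off the container spec. A minimal poststart sketch, with an illustrative handler command (the suite's handler instead contacts the helper pod it creates in "create the container to handle the HTTPGet hook request"); note the handler type is named Handler in 1.15-era clients and LifecycleHandler in current k8s.io/api:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container immediately after it starts.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo poststart"}},
					},
				},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
```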
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:25:26.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 22 13:25:26.190: INFO: Waiting up to 5m0s for pod "pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e" in namespace "emptydir-982" to be "success or failure"
Mar 22 13:25:26.221: INFO: Pod "pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.063523ms
Mar 22 13:25:28.226: INFO: Pod "pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035184241s
Mar 22 13:25:30.230: INFO: Pod "pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039583926s
STEP: Saw pod success
Mar 22 13:25:30.230: INFO: Pod "pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e" satisfied condition "success or failure"
Mar 22 13:25:30.233: INFO: Trying to get logs from node iruya-worker2 pod pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e container test-container:
STEP: delete the pod
Mar 22 13:25:30.254: INFO: Waiting for pod pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e to disappear
Mar 22 13:25:30.258: INFO: Pod pod-d2eca0d7-a507-4c38-a463-a5ab2a45cf2e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:25:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-982" for this suite.
Mar 22 13:25:36.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:25:36.349: INFO: namespace emptydir-982 deletion completed in 6.088754096s
• [SLOW TEST:10.229 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
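"(root,0644,default)" reads as: running as root, expecting file mode 0644, on the default (disk-backed) emptyDir medium. A busybox-based equivalent of the check follows; it is a sketch, not the suite's fixture, which uses its own mounttest image:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// No Medium set: the default, disk-backed emptyDir.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file with mode 0644 and print the mode back.
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pod)
}
```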
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:25:36.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Mar 22 13:25:43.470: INFO: 0 pods remaining
Mar 22 13:25:43.470: INFO: 0 pods has nil DeletionTimestamp
Mar 22 13:25:43.470: INFO:
Mar 22 13:25:44.168: INFO: 0 pods remaining
Mar 22 13:25:44.168: INFO: 0 pods has nil DeletionTimestamp
Mar 22 13:25:44.168: INFO:
STEP: Gathering metrics
W0322 13:25:44.606874 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 22 13:25:44.606: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:25:44.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9126" for this suite.
Mar 22 13:25:50.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:25:50.722: INFO: namespace gc-9126 deletion completed in 6.112098822s
• [SLOW TEST:14.372 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
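"if the deleteOptions says so" refers to the delete propagation policy: with Foreground propagation the RC gets a deletionTimestamp plus a foregroundDeletion finalizer and is only removed after the garbage collector has deleted its pods. A client-go sketch, with illustrative RC name, namespace, and kubeconfig path, assuming a recent client-go where Delete takes a context:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground: the RC stays visible (with a foregroundDeletion finalizer)
	// until the garbage collector has removed all of its pods.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	fmt.Println("delete issued:", err)
}
```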
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:25:50.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5188
I0322 13:25:50.806002 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5188, replica count: 1
I0322 13:25:51.856615 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 13:25:52.857499 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 13:25:53.857717 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 22 13:25:53.984: INFO: Created: latency-svc-j9tjw Mar 22 13:25:54.021: INFO: Got endpoints: latency-svc-j9tjw [64.0082ms] Mar 22 13:25:54.048: INFO: Created: latency-svc-9sxn7 Mar 22 13:25:54.061: INFO: Got endpoints: latency-svc-9sxn7 [39.335249ms] Mar 22 13:25:54.078: INFO: Created: latency-svc-9v78b Mar 22 13:25:54.092: INFO: Got endpoints: latency-svc-9v78b [70.420338ms] Mar 22 13:25:54.108: INFO: Created: latency-svc-jzscl Mar 22 13:25:54.165: INFO: Got endpoints: latency-svc-jzscl [143.404986ms] Mar 22 13:25:54.167: INFO: Created: latency-svc-dzvct Mar 22 13:25:54.175: INFO: Got endpoints: latency-svc-dzvct [153.396918ms] Mar 22 13:25:54.195: INFO: Created: latency-svc-c2zl8 Mar 22 13:25:54.206: INFO: Got endpoints: latency-svc-c2zl8 [184.479907ms] Mar 22 13:25:54.224: INFO: Created: latency-svc-q45z8 Mar 22 13:25:54.236: INFO: Got endpoints: latency-svc-q45z8 [214.565092ms] Mar 22 13:25:54.263: INFO: Created: latency-svc-nhx88 Mar 22 13:25:54.315: INFO: Got endpoints: latency-svc-nhx88 [292.924339ms] Mar 22 13:25:54.330: INFO: Created: latency-svc-m89qh Mar 22 13:25:54.344: INFO: Got endpoints: latency-svc-m89qh [322.474922ms] Mar 22 13:25:54.370: INFO: Created: latency-svc-hlg7p Mar 22 13:25:54.380: INFO: Got endpoints: latency-svc-hlg7p [358.464542ms] Mar 22 13:25:54.398: INFO: Created: latency-svc-jmfp5 Mar 22 13:25:54.410: INFO: Got endpoints: latency-svc-jmfp5 [388.69337ms] Mar 22 13:25:54.458: INFO: Created: latency-svc-ndhbl Mar 22 13:25:54.471: INFO: Got endpoints: latency-svc-ndhbl [449.507184ms] Mar 22 13:25:54.485: INFO: Created: latency-svc-hjsns Mar 22 13:25:54.501: INFO: Got endpoints: latency-svc-hjsns [479.635984ms] Mar 22 13:25:54.522: INFO: Created: latency-svc-xsp79 Mar 22 13:25:54.578: INFO: Got endpoints: latency-svc-xsp79 [556.70611ms] Mar 22 13:25:54.586: INFO: Created: latency-svc-mz9fd Mar 22 13:25:54.620: INFO: Got endpoints: latency-svc-mz9fd [598.093595ms] Mar 22 13:25:54.621: INFO: Created: latency-svc-zsdpb Mar 22 13:25:54.647: INFO: Got endpoints: latency-svc-zsdpb [625.141772ms] Mar 22 13:25:54.741: INFO: Created: latency-svc-pbbjp Mar 22 13:25:54.744: INFO: Got endpoints: latency-svc-pbbjp [682.885928ms] Mar 22 13:25:54.806: INFO: Created: latency-svc-zkvkp Mar 22 13:25:54.827: INFO: Got endpoints: latency-svc-zkvkp [734.796752ms] Mar 22 13:25:54.884: INFO: Created: latency-svc-jsjtc Mar 22 13:25:54.887: INFO: Got endpoints: latency-svc-jsjtc [721.902431ms] Mar 22 13:25:54.912: INFO: Created: latency-svc-t5n76 Mar 22 13:25:54.923: INFO: Got endpoints: latency-svc-t5n76 [748.076127ms] Mar 22 13:25:54.955: INFO: Created: latency-svc-6dx95 Mar 22 13:25:54.979: INFO: Got endpoints: latency-svc-6dx95 [773.036421ms] Mar 22 13:25:55.046: INFO: Created: latency-svc-g56rc Mar 22 13:25:55.058: INFO: Got endpoints: latency-svc-g56rc [821.905468ms] Mar 22 13:25:55.088: INFO: Created: latency-svc-zcnst Mar 22 13:25:55.112: INFO: Got endpoints: latency-svc-zcnst [796.843927ms] Mar 22 13:25:55.133: INFO: Created: latency-svc-crpx7 Mar 22 13:25:55.207: INFO: Got endpoints: latency-svc-crpx7 [863.041648ms] Mar 22 13:25:55.209: INFO: Created: latency-svc-7cnb8 Mar 22 13:25:55.212: INFO: Got endpoints: latency-svc-7cnb8 [831.411342ms] Mar 22 13:25:55.256: INFO: Created: latency-svc-55lbt Mar 22 13:25:55.266: INFO: Got endpoints: latency-svc-55lbt [855.961725ms] Mar 22 13:25:55.290: INFO: Created: latency-svc-rpmjj Mar 22 13:25:55.303: INFO: Got endpoints: latency-svc-rpmjj [831.503565ms] Mar 22 13:25:55.339: INFO: Created: latency-svc-kqdpq Mar 22 13:25:55.362: INFO: Got endpoints: latency-svc-kqdpq [860.545527ms] Mar 22 13:25:55.362: INFO: Created: latency-svc-4xxhb Mar 22 13:25:55.375: INFO: Got endpoints: latency-svc-4xxhb [796.95623ms] Mar 22 13:25:55.394: INFO: Created: latency-svc-fppsf Mar 22 13:25:55.406: INFO:
Got endpoints: latency-svc-fppsf [785.97334ms] Mar 22 13:25:55.424: INFO: Created: latency-svc-8k4fd Mar 22 13:25:55.436: INFO: Got endpoints: latency-svc-8k4fd [789.091295ms] Mar 22 13:25:55.489: INFO: Created: latency-svc-n5rlr Mar 22 13:25:55.492: INFO: Got endpoints: latency-svc-n5rlr [748.260435ms] Mar 22 13:25:55.542: INFO: Created: latency-svc-bkw6v Mar 22 13:25:55.557: INFO: Got endpoints: latency-svc-bkw6v [729.870766ms] Mar 22 13:25:55.578: INFO: Created: latency-svc-jgmzg Mar 22 13:25:55.638: INFO: Got endpoints: latency-svc-jgmzg [751.331303ms] Mar 22 13:25:55.658: INFO: Created: latency-svc-j8ppf Mar 22 13:25:55.682: INFO: Got endpoints: latency-svc-j8ppf [758.722195ms] Mar 22 13:25:55.710: INFO: Created: latency-svc-2bdsr Mar 22 13:25:55.725: INFO: Got endpoints: latency-svc-2bdsr [745.904349ms] Mar 22 13:25:55.776: INFO: Created: latency-svc-frv4p Mar 22 13:25:55.779: INFO: Got endpoints: latency-svc-frv4p [721.098467ms] Mar 22 13:25:55.826: INFO: Created: latency-svc-8bdzx Mar 22 13:25:55.841: INFO: Got endpoints: latency-svc-8bdzx [729.557082ms] Mar 22 13:25:55.862: INFO: Created: latency-svc-r9h82 Mar 22 13:25:55.870: INFO: Got endpoints: latency-svc-r9h82 [662.666839ms] Mar 22 13:25:55.914: INFO: Created: latency-svc-gnktv Mar 22 13:25:55.930: INFO: Got endpoints: latency-svc-gnktv [718.634331ms] Mar 22 13:25:55.968: INFO: Created: latency-svc-6s7d2 Mar 22 13:25:55.985: INFO: Got endpoints: latency-svc-6s7d2 [718.369123ms] Mar 22 13:25:56.012: INFO: Created: latency-svc-m2s7v Mar 22 13:25:56.051: INFO: Got endpoints: latency-svc-m2s7v [748.242261ms] Mar 22 13:25:56.066: INFO: Created: latency-svc-h6s7d Mar 22 13:25:56.094: INFO: Got endpoints: latency-svc-h6s7d [732.313153ms] Mar 22 13:25:56.114: INFO: Created: latency-svc-lc7zl Mar 22 13:25:56.123: INFO: Got endpoints: latency-svc-lc7zl [747.817742ms] Mar 22 13:25:56.142: INFO: Created: latency-svc-jwmsg Mar 22 13:25:56.183: INFO: Got endpoints: latency-svc-jwmsg [777.110674ms] Mar 22 13:25:56.189: INFO: Created: latency-svc-c4hpf Mar 22 13:25:56.202: INFO: Got endpoints: latency-svc-c4hpf [765.86838ms] Mar 22 13:25:56.222: INFO: Created: latency-svc-pxjx4 Mar 22 13:25:56.238: INFO: Got endpoints: latency-svc-pxjx4 [746.180742ms] Mar 22 13:25:56.258: INFO: Created: latency-svc-dtl5x Mar 22 13:25:56.275: INFO: Got endpoints: latency-svc-dtl5x [718.433989ms] Mar 22 13:25:56.340: INFO: Created: latency-svc-lhqcn Mar 22 13:25:56.342: INFO: Got endpoints: latency-svc-lhqcn [703.653779ms] Mar 22 13:25:56.390: INFO: Created: latency-svc-w8lsp Mar 22 13:25:56.401: INFO: Got endpoints: latency-svc-w8lsp [719.319182ms] Mar 22 13:25:56.420: INFO: Created: latency-svc-69g8h Mar 22 13:25:56.434: INFO: Got endpoints: latency-svc-69g8h [708.174504ms] Mar 22 13:25:56.489: INFO: Created: latency-svc-mhhkb Mar 22 13:25:56.491: INFO: Got endpoints: latency-svc-mhhkb [711.92495ms] Mar 22 13:25:56.520: INFO: Created: latency-svc-8dlnh Mar 22 13:25:56.534: INFO: Got endpoints: latency-svc-8dlnh [692.950281ms] Mar 22 13:25:56.555: INFO: Created: latency-svc-j72h7 Mar 22 13:25:56.570: INFO: Got endpoints: latency-svc-j72h7 [700.329984ms] Mar 22 13:25:56.627: INFO: Created: latency-svc-t6l4n Mar 22 13:25:56.630: INFO: Got endpoints: latency-svc-t6l4n [700.01603ms] Mar 22 13:25:56.648: INFO: Created: latency-svc-fs656 Mar 22 13:25:56.661: INFO: Got endpoints: latency-svc-fs656 [676.192529ms] Mar 22 13:25:56.690: INFO: Created: latency-svc-vp9zd Mar 22 13:25:56.703: INFO: Got endpoints: latency-svc-vp9zd [652.097776ms] Mar 22 13:25:56.764: INFO: 
Created: latency-svc-nrncv Mar 22 13:25:56.783: INFO: Got endpoints: latency-svc-nrncv [688.855012ms] Mar 22 13:25:56.814: INFO: Created: latency-svc-kvpxt Mar 22 13:25:56.840: INFO: Got endpoints: latency-svc-kvpxt [716.200654ms] Mar 22 13:25:56.902: INFO: Created: latency-svc-r626z Mar 22 13:25:56.905: INFO: Got endpoints: latency-svc-r626z [722.006263ms] Mar 22 13:25:56.930: INFO: Created: latency-svc-hplsg Mar 22 13:25:56.938: INFO: Got endpoints: latency-svc-hplsg [736.166981ms] Mar 22 13:25:56.963: INFO: Created: latency-svc-pkssp Mar 22 13:25:56.974: INFO: Got endpoints: latency-svc-pkssp [735.777065ms] Mar 22 13:25:57.046: INFO: Created: latency-svc-pgqsh Mar 22 13:25:57.050: INFO: Got endpoints: latency-svc-pgqsh [774.225824ms] Mar 22 13:25:57.074: INFO: Created: latency-svc-kqfgn Mar 22 13:25:57.089: INFO: Got endpoints: latency-svc-kqfgn [746.975817ms] Mar 22 13:25:57.109: INFO: Created: latency-svc-vjkzz Mar 22 13:25:57.132: INFO: Got endpoints: latency-svc-vjkzz [730.257155ms] Mar 22 13:25:57.183: INFO: Created: latency-svc-wprb2 Mar 22 13:25:57.187: INFO: Got endpoints: latency-svc-wprb2 [753.018298ms] Mar 22 13:25:57.215: INFO: Created: latency-svc-j7bc6 Mar 22 13:25:57.227: INFO: Got endpoints: latency-svc-j7bc6 [736.037736ms] Mar 22 13:25:57.264: INFO: Created: latency-svc-j9swt Mar 22 13:25:57.276: INFO: Got endpoints: latency-svc-j9swt [741.91975ms] Mar 22 13:25:57.327: INFO: Created: latency-svc-4rjht Mar 22 13:25:57.330: INFO: Got endpoints: latency-svc-4rjht [759.789649ms] Mar 22 13:25:57.380: INFO: Created: latency-svc-4sn4h Mar 22 13:25:57.401: INFO: Got endpoints: latency-svc-4sn4h [770.654319ms] Mar 22 13:25:57.466: INFO: Created: latency-svc-5dr6d Mar 22 13:25:57.474: INFO: Got endpoints: latency-svc-5dr6d [812.593605ms] Mar 22 13:25:57.500: INFO: Created: latency-svc-dwnwz Mar 22 13:25:57.511: INFO: Got endpoints: latency-svc-dwnwz [807.714232ms] Mar 22 13:25:57.530: INFO: Created: latency-svc-cjxmd Mar 22 13:25:57.541: INFO: Got endpoints: latency-svc-cjxmd [758.131141ms] Mar 22 13:25:57.560: INFO: Created: latency-svc-nx2zm Mar 22 13:25:57.608: INFO: Got endpoints: latency-svc-nx2zm [768.517764ms] Mar 22 13:25:57.618: INFO: Created: latency-svc-4c7kc Mar 22 13:25:57.632: INFO: Got endpoints: latency-svc-4c7kc [727.090191ms] Mar 22 13:25:57.655: INFO: Created: latency-svc-rkpgz Mar 22 13:25:57.668: INFO: Got endpoints: latency-svc-rkpgz [730.126094ms] Mar 22 13:25:57.692: INFO: Created: latency-svc-j4f2r Mar 22 13:25:57.704: INFO: Got endpoints: latency-svc-j4f2r [730.052796ms] Mar 22 13:25:57.753: INFO: Created: latency-svc-v9vjz Mar 22 13:25:57.755: INFO: Got endpoints: latency-svc-v9vjz [705.622262ms] Mar 22 13:25:57.783: INFO: Created: latency-svc-mflhb Mar 22 13:25:57.795: INFO: Got endpoints: latency-svc-mflhb [705.717258ms] Mar 22 13:25:57.834: INFO: Created: latency-svc-wtmw6 Mar 22 13:25:57.878: INFO: Got endpoints: latency-svc-wtmw6 [745.994473ms] Mar 22 13:25:57.905: INFO: Created: latency-svc-5rw4v Mar 22 13:25:57.921: INFO: Got endpoints: latency-svc-5rw4v [734.806989ms] Mar 22 13:25:57.950: INFO: Created: latency-svc-mxwnf Mar 22 13:25:57.964: INFO: Got endpoints: latency-svc-mxwnf [736.157675ms] Mar 22 13:25:58.016: INFO: Created: latency-svc-z6pvz Mar 22 13:25:58.018: INFO: Got endpoints: latency-svc-z6pvz [741.871633ms] Mar 22 13:25:58.085: INFO: Created: latency-svc-dqnb7 Mar 22 13:25:58.171: INFO: Got endpoints: latency-svc-dqnb7 [840.662079ms] Mar 22 13:25:58.174: INFO: Created: latency-svc-w5np9 Mar 22 13:25:58.196: INFO: Got endpoints: 
latency-svc-w5np9 [794.553026ms] Mar 22 13:25:58.226: INFO: Created: latency-svc-tx8x7 Mar 22 13:25:58.235: INFO: Got endpoints: latency-svc-tx8x7 [760.775928ms] Mar 22 13:25:58.266: INFO: Created: latency-svc-4rjvt Mar 22 13:25:58.295: INFO: Got endpoints: latency-svc-4rjvt [784.322833ms] Mar 22 13:25:58.321: INFO: Created: latency-svc-wxsv6 Mar 22 13:25:58.332: INFO: Got endpoints: latency-svc-wxsv6 [790.011614ms] Mar 22 13:25:58.352: INFO: Created: latency-svc-qbl4f Mar 22 13:25:58.368: INFO: Got endpoints: latency-svc-qbl4f [759.397284ms] Mar 22 13:25:58.388: INFO: Created: latency-svc-cmzfw Mar 22 13:25:58.428: INFO: Got endpoints: latency-svc-cmzfw [796.29048ms] Mar 22 13:25:58.442: INFO: Created: latency-svc-zdxct Mar 22 13:25:58.458: INFO: Got endpoints: latency-svc-zdxct [789.754082ms] Mar 22 13:25:58.481: INFO: Created: latency-svc-qxltl Mar 22 13:25:58.495: INFO: Got endpoints: latency-svc-qxltl [790.194614ms] Mar 22 13:25:58.517: INFO: Created: latency-svc-lp6xx Mar 22 13:25:58.560: INFO: Got endpoints: latency-svc-lp6xx [804.740796ms] Mar 22 13:25:58.565: INFO: Created: latency-svc-ljqbh Mar 22 13:25:58.579: INFO: Got endpoints: latency-svc-ljqbh [783.786689ms] Mar 22 13:25:58.598: INFO: Created: latency-svc-nkgdg Mar 22 13:25:58.615: INFO: Got endpoints: latency-svc-nkgdg [737.374751ms] Mar 22 13:25:58.646: INFO: Created: latency-svc-xxpzr Mar 22 13:25:58.740: INFO: Got endpoints: latency-svc-xxpzr [818.660127ms] Mar 22 13:25:58.742: INFO: Created: latency-svc-524kt Mar 22 13:25:58.765: INFO: Got endpoints: latency-svc-524kt [801.713778ms] Mar 22 13:25:58.820: INFO: Created: latency-svc-zjbrt Mar 22 13:25:58.832: INFO: Got endpoints: latency-svc-zjbrt [814.286798ms] Mar 22 13:25:58.890: INFO: Created: latency-svc-2q25j Mar 22 13:25:58.893: INFO: Got endpoints: latency-svc-2q25j [721.478671ms] Mar 22 13:25:58.919: INFO: Created: latency-svc-879jd Mar 22 13:25:58.934: INFO: Got endpoints: latency-svc-879jd [738.589851ms] Mar 22 13:25:58.956: INFO: Created: latency-svc-w8k78 Mar 22 13:25:58.977: INFO: Got endpoints: latency-svc-w8k78 [742.472342ms] Mar 22 13:25:59.036: INFO: Created: latency-svc-j4jqf Mar 22 13:25:59.055: INFO: Got endpoints: latency-svc-j4jqf [759.584423ms] Mar 22 13:25:59.090: INFO: Created: latency-svc-c7rkx Mar 22 13:25:59.103: INFO: Got endpoints: latency-svc-c7rkx [771.768694ms] Mar 22 13:25:59.123: INFO: Created: latency-svc-b4ggp Mar 22 13:25:59.165: INFO: Got endpoints: latency-svc-b4ggp [797.193582ms] Mar 22 13:25:59.171: INFO: Created: latency-svc-8pkc7 Mar 22 13:25:59.188: INFO: Got endpoints: latency-svc-8pkc7 [759.54856ms] Mar 22 13:25:59.217: INFO: Created: latency-svc-xqxkp Mar 22 13:25:59.243: INFO: Got endpoints: latency-svc-xqxkp [784.262336ms] Mar 22 13:25:59.263: INFO: Created: latency-svc-k7nrs Mar 22 13:25:59.320: INFO: Got endpoints: latency-svc-k7nrs [825.699516ms] Mar 22 13:25:59.346: INFO: Created: latency-svc-mm7sn Mar 22 13:25:59.357: INFO: Got endpoints: latency-svc-mm7sn [796.458928ms] Mar 22 13:25:59.378: INFO: Created: latency-svc-9jnwk Mar 22 13:25:59.393: INFO: Got endpoints: latency-svc-9jnwk [814.44386ms] Mar 22 13:25:59.415: INFO: Created: latency-svc-nbdpf Mar 22 13:25:59.453: INFO: Got endpoints: latency-svc-nbdpf [837.362733ms] Mar 22 13:25:59.468: INFO: Created: latency-svc-nmc7z Mar 22 13:25:59.483: INFO: Got endpoints: latency-svc-nmc7z [743.131525ms] Mar 22 13:25:59.507: INFO: Created: latency-svc-jjdhb Mar 22 13:25:59.520: INFO: Got endpoints: latency-svc-jjdhb [754.289708ms] Mar 22 13:25:59.543: INFO: Created: 
latency-svc-whrl4 Mar 22 13:25:59.584: INFO: Got endpoints: latency-svc-whrl4 [751.525489ms] Mar 22 13:25:59.600: INFO: Created: latency-svc-5qps2 Mar 22 13:25:59.616: INFO: Got endpoints: latency-svc-5qps2 [723.517323ms] Mar 22 13:25:59.636: INFO: Created: latency-svc-9x6l9 Mar 22 13:25:59.646: INFO: Got endpoints: latency-svc-9x6l9 [711.779389ms] Mar 22 13:25:59.670: INFO: Created: latency-svc-gn8lw Mar 22 13:25:59.683: INFO: Got endpoints: latency-svc-gn8lw [705.388349ms] Mar 22 13:25:59.729: INFO: Created: latency-svc-td77k Mar 22 13:25:59.731: INFO: Got endpoints: latency-svc-td77k [676.076519ms] Mar 22 13:25:59.780: INFO: Created: latency-svc-7qqmx Mar 22 13:25:59.791: INFO: Got endpoints: latency-svc-7qqmx [687.677028ms] Mar 22 13:25:59.822: INFO: Created: latency-svc-4w9ws Mar 22 13:25:59.891: INFO: Got endpoints: latency-svc-4w9ws [725.852064ms] Mar 22 13:25:59.934: INFO: Created: latency-svc-rd664 Mar 22 13:25:59.948: INFO: Got endpoints: latency-svc-rd664 [760.104483ms] Mar 22 13:25:59.970: INFO: Created: latency-svc-5k9dt Mar 22 13:26:00.034: INFO: Got endpoints: latency-svc-5k9dt [790.881329ms] Mar 22 13:26:00.035: INFO: Created: latency-svc-grzz2 Mar 22 13:26:00.044: INFO: Got endpoints: latency-svc-grzz2 [723.618084ms] Mar 22 13:26:00.067: INFO: Created: latency-svc-8l6ss Mar 22 13:26:00.080: INFO: Got endpoints: latency-svc-8l6ss [723.700056ms] Mar 22 13:26:00.101: INFO: Created: latency-svc-dq97v Mar 22 13:26:00.117: INFO: Got endpoints: latency-svc-dq97v [723.744376ms] Mar 22 13:26:00.161: INFO: Created: latency-svc-9ggx6 Mar 22 13:26:00.177: INFO: Got endpoints: latency-svc-9ggx6 [724.505292ms] Mar 22 13:26:00.199: INFO: Created: latency-svc-cm9wn Mar 22 13:26:00.213: INFO: Got endpoints: latency-svc-cm9wn [730.025164ms] Mar 22 13:26:00.236: INFO: Created: latency-svc-nngdz Mar 22 13:26:00.244: INFO: Got endpoints: latency-svc-nngdz [723.954165ms] Mar 22 13:26:00.297: INFO: Created: latency-svc-hvx97 Mar 22 13:26:00.300: INFO: Got endpoints: latency-svc-hvx97 [716.100944ms] Mar 22 13:26:00.323: INFO: Created: latency-svc-5glbr Mar 22 13:26:00.334: INFO: Got endpoints: latency-svc-5glbr [717.760172ms] Mar 22 13:26:00.372: INFO: Created: latency-svc-gvrlz Mar 22 13:26:00.382: INFO: Got endpoints: latency-svc-gvrlz [736.012645ms] Mar 22 13:26:00.435: INFO: Created: latency-svc-vm6vq Mar 22 13:26:00.445: INFO: Got endpoints: latency-svc-vm6vq [762.527542ms] Mar 22 13:26:00.470: INFO: Created: latency-svc-fsmj6 Mar 22 13:26:00.487: INFO: Got endpoints: latency-svc-fsmj6 [755.616701ms] Mar 22 13:26:00.515: INFO: Created: latency-svc-xkkdf Mar 22 13:26:00.534: INFO: Got endpoints: latency-svc-xkkdf [742.278492ms] Mar 22 13:26:00.576: INFO: Created: latency-svc-k7jlc Mar 22 13:26:00.587: INFO: Got endpoints: latency-svc-k7jlc [696.513658ms] Mar 22 13:26:00.614: INFO: Created: latency-svc-shf68 Mar 22 13:26:00.630: INFO: Got endpoints: latency-svc-shf68 [682.124446ms] Mar 22 13:26:00.650: INFO: Created: latency-svc-7v9cc Mar 22 13:26:00.710: INFO: Got endpoints: latency-svc-7v9cc [676.286868ms] Mar 22 13:26:00.731: INFO: Created: latency-svc-gbhfc Mar 22 13:26:00.745: INFO: Got endpoints: latency-svc-gbhfc [700.406123ms] Mar 22 13:26:00.761: INFO: Created: latency-svc-pj9bk Mar 22 13:26:00.775: INFO: Got endpoints: latency-svc-pj9bk [694.0438ms] Mar 22 13:26:00.806: INFO: Created: latency-svc-8cz4b Mar 22 13:26:00.873: INFO: Got endpoints: latency-svc-8cz4b [755.647818ms] Mar 22 13:26:00.875: INFO: Created: latency-svc-sqfts Mar 22 13:26:00.883: INFO: Got endpoints: 
latency-svc-sqfts [705.953246ms] Mar 22 13:26:00.905: INFO: Created: latency-svc-8gjl5 Mar 22 13:26:00.920: INFO: Got endpoints: latency-svc-8gjl5 [706.02281ms] Mar 22 13:26:00.942: INFO: Created: latency-svc-zglmw Mar 22 13:26:00.956: INFO: Got endpoints: latency-svc-zglmw [711.720644ms] Mar 22 13:26:01.016: INFO: Created: latency-svc-kh7d2 Mar 22 13:26:01.033: INFO: Got endpoints: latency-svc-kh7d2 [733.128549ms] Mar 22 13:26:01.064: INFO: Created: latency-svc-lslqh Mar 22 13:26:01.097: INFO: Got endpoints: latency-svc-lslqh [763.279234ms] Mar 22 13:26:01.159: INFO: Created: latency-svc-pplpn Mar 22 13:26:01.164: INFO: Got endpoints: latency-svc-pplpn [781.543122ms] Mar 22 13:26:01.184: INFO: Created: latency-svc-pvgc2 Mar 22 13:26:01.195: INFO: Got endpoints: latency-svc-pvgc2 [749.514116ms] Mar 22 13:26:01.226: INFO: Created: latency-svc-qkg6t Mar 22 13:26:01.243: INFO: Got endpoints: latency-svc-qkg6t [755.959667ms] Mar 22 13:26:01.291: INFO: Created: latency-svc-9cr6r Mar 22 13:26:01.297: INFO: Got endpoints: latency-svc-9cr6r [763.38748ms] Mar 22 13:26:01.320: INFO: Created: latency-svc-jtg5p Mar 22 13:26:01.333: INFO: Got endpoints: latency-svc-jtg5p [745.76872ms] Mar 22 13:26:01.355: INFO: Created: latency-svc-d6cgb Mar 22 13:26:01.364: INFO: Got endpoints: latency-svc-d6cgb [733.13274ms] Mar 22 13:26:01.381: INFO: Created: latency-svc-svdr7 Mar 22 13:26:01.428: INFO: Got endpoints: latency-svc-svdr7 [718.484519ms] Mar 22 13:26:01.442: INFO: Created: latency-svc-dzpmq Mar 22 13:26:01.454: INFO: Got endpoints: latency-svc-dzpmq [709.498768ms] Mar 22 13:26:01.472: INFO: Created: latency-svc-g4zrl Mar 22 13:26:01.485: INFO: Got endpoints: latency-svc-g4zrl [710.16677ms] Mar 22 13:26:01.506: INFO: Created: latency-svc-xrbjw Mar 22 13:26:01.521: INFO: Got endpoints: latency-svc-xrbjw [648.067249ms] Mar 22 13:26:01.567: INFO: Created: latency-svc-gm4v8 Mar 22 13:26:01.574: INFO: Got endpoints: latency-svc-gm4v8 [690.866921ms] Mar 22 13:26:01.628: INFO: Created: latency-svc-bxh7p Mar 22 13:26:01.652: INFO: Got endpoints: latency-svc-bxh7p [732.167386ms] Mar 22 13:26:01.710: INFO: Created: latency-svc-2fjs5 Mar 22 13:26:01.714: INFO: Got endpoints: latency-svc-2fjs5 [757.911992ms] Mar 22 13:26:01.739: INFO: Created: latency-svc-lbxff Mar 22 13:26:01.756: INFO: Got endpoints: latency-svc-lbxff [722.515862ms] Mar 22 13:26:01.775: INFO: Created: latency-svc-h5nr6 Mar 22 13:26:01.786: INFO: Got endpoints: latency-svc-h5nr6 [689.051542ms] Mar 22 13:26:01.805: INFO: Created: latency-svc-j59qj Mar 22 13:26:01.842: INFO: Got endpoints: latency-svc-j59qj [678.218612ms] Mar 22 13:26:01.855: INFO: Created: latency-svc-hjxg8 Mar 22 13:26:01.871: INFO: Got endpoints: latency-svc-hjxg8 [676.190613ms] Mar 22 13:26:01.892: INFO: Created: latency-svc-xsf8j Mar 22 13:26:01.908: INFO: Got endpoints: latency-svc-xsf8j [664.47842ms] Mar 22 13:26:01.938: INFO: Created: latency-svc-fwnm8 Mar 22 13:26:01.974: INFO: Got endpoints: latency-svc-fwnm8 [676.7727ms] Mar 22 13:26:01.979: INFO: Created: latency-svc-r4kxb Mar 22 13:26:01.992: INFO: Got endpoints: latency-svc-r4kxb [658.721784ms] Mar 22 13:26:02.018: INFO: Created: latency-svc-2k2hx Mar 22 13:26:02.028: INFO: Got endpoints: latency-svc-2k2hx [664.768252ms] Mar 22 13:26:02.048: INFO: Created: latency-svc-2hmn8 Mar 22 13:26:02.059: INFO: Got endpoints: latency-svc-2hmn8 [630.710288ms] Mar 22 13:26:02.118: INFO: Created: latency-svc-dbt5h Mar 22 13:26:02.121: INFO: Got endpoints: latency-svc-dbt5h [666.223114ms] Mar 22 13:26:02.166: INFO: Created: 
latency-svc-wwz2l Mar 22 13:26:02.195: INFO: Got endpoints: latency-svc-wwz2l [710.369763ms] Mar 22 13:26:02.262: INFO: Created: latency-svc-znv2v Mar 22 13:26:02.269: INFO: Got endpoints: latency-svc-znv2v [748.478309ms] Mar 22 13:26:02.288: INFO: Created: latency-svc-5bjzz Mar 22 13:26:02.312: INFO: Got endpoints: latency-svc-5bjzz [737.621475ms] Mar 22 13:26:02.339: INFO: Created: latency-svc-h7znq Mar 22 13:26:02.354: INFO: Got endpoints: latency-svc-h7znq [702.233869ms] Mar 22 13:26:02.405: INFO: Created: latency-svc-5jgrb Mar 22 13:26:02.432: INFO: Created: latency-svc-845cd Mar 22 13:26:02.432: INFO: Got endpoints: latency-svc-5jgrb [717.982485ms] Mar 22 13:26:02.455: INFO: Got endpoints: latency-svc-845cd [699.06723ms] Mar 22 13:26:02.492: INFO: Created: latency-svc-pbwrx Mar 22 13:26:02.536: INFO: Got endpoints: latency-svc-pbwrx [749.709041ms] Mar 22 13:26:02.555: INFO: Created: latency-svc-p529g Mar 22 13:26:02.571: INFO: Got endpoints: latency-svc-p529g [728.713016ms] Mar 22 13:26:02.591: INFO: Created: latency-svc-2cvv5 Mar 22 13:26:02.602: INFO: Got endpoints: latency-svc-2cvv5 [730.460546ms] Mar 22 13:26:02.634: INFO: Created: latency-svc-q98qj Mar 22 13:26:02.668: INFO: Got endpoints: latency-svc-q98qj [760.262094ms] Mar 22 13:26:02.683: INFO: Created: latency-svc-hs69q Mar 22 13:26:02.699: INFO: Got endpoints: latency-svc-hs69q [724.752943ms] Mar 22 13:26:02.744: INFO: Created: latency-svc-f98jk Mar 22 13:26:02.759: INFO: Got endpoints: latency-svc-f98jk [766.564132ms] Mar 22 13:26:02.800: INFO: Created: latency-svc-6p498 Mar 22 13:26:02.875: INFO: Got endpoints: latency-svc-6p498 [846.042003ms] Mar 22 13:26:02.938: INFO: Created: latency-svc-d5p6m Mar 22 13:26:02.940: INFO: Got endpoints: latency-svc-d5p6m [881.242204ms] Mar 22 13:26:02.964: INFO: Created: latency-svc-ljcqr Mar 22 13:26:02.976: INFO: Got endpoints: latency-svc-ljcqr [855.795946ms] Mar 22 13:26:03.009: INFO: Created: latency-svc-m646r Mar 22 13:26:03.037: INFO: Got endpoints: latency-svc-m646r [841.66509ms] Mar 22 13:26:03.124: INFO: Created: latency-svc-lhc6h Mar 22 13:26:03.127: INFO: Got endpoints: latency-svc-lhc6h [857.321875ms] Mar 22 13:26:03.149: INFO: Created: latency-svc-4ftcs Mar 22 13:26:03.175: INFO: Got endpoints: latency-svc-4ftcs [863.25742ms] Mar 22 13:26:03.206: INFO: Created: latency-svc-6tqtr Mar 22 13:26:03.218: INFO: Got endpoints: latency-svc-6tqtr [863.335569ms] Mar 22 13:26:03.262: INFO: Created: latency-svc-vj2fj Mar 22 13:26:03.266: INFO: Got endpoints: latency-svc-vj2fj [833.836693ms] Mar 22 13:26:03.287: INFO: Created: latency-svc-c6n7w Mar 22 13:26:03.296: INFO: Got endpoints: latency-svc-c6n7w [840.644645ms] Mar 22 13:26:03.342: INFO: Created: latency-svc-6ct4d Mar 22 13:26:03.356: INFO: Got endpoints: latency-svc-6ct4d [819.90648ms] Mar 22 13:26:03.399: INFO: Created: latency-svc-jbwxk Mar 22 13:26:03.405: INFO: Got endpoints: latency-svc-jbwxk [833.769335ms] Mar 22 13:26:03.428: INFO: Created: latency-svc-2d7dq Mar 22 13:26:03.441: INFO: Got endpoints: latency-svc-2d7dq [839.368687ms] Mar 22 13:26:03.462: INFO: Created: latency-svc-z6s49 Mar 22 13:26:03.477: INFO: Got endpoints: latency-svc-z6s49 [809.248388ms] Mar 22 13:26:03.538: INFO: Created: latency-svc-hm8nn Mar 22 13:26:03.540: INFO: Got endpoints: latency-svc-hm8nn [841.7735ms] Mar 22 13:26:03.563: INFO: Created: latency-svc-bpc7w Mar 22 13:26:03.573: INFO: Got endpoints: latency-svc-bpc7w [814.67296ms] Mar 22 13:26:03.595: INFO: Created: latency-svc-trqq7 Mar 22 13:26:03.626: INFO: Got endpoints: latency-svc-trqq7 
[751.062359ms] Mar 22 13:26:03.674: INFO: Created: latency-svc-7jsgx Mar 22 13:26:03.682: INFO: Got endpoints: latency-svc-7jsgx [741.69467ms] Mar 22 13:26:03.702: INFO: Created: latency-svc-cphw7 Mar 22 13:26:03.718: INFO: Got endpoints: latency-svc-cphw7 [742.048739ms] Mar 22 13:26:03.743: INFO: Created: latency-svc-m4lmh Mar 22 13:26:03.755: INFO: Got endpoints: latency-svc-m4lmh [717.591781ms] Mar 22 13:26:03.812: INFO: Created: latency-svc-945g2 Mar 22 13:26:03.815: INFO: Got endpoints: latency-svc-945g2 [687.690471ms] Mar 22 13:26:03.842: INFO: Created: latency-svc-qpdbn Mar 22 13:26:03.857: INFO: Got endpoints: latency-svc-qpdbn [682.097161ms] Mar 22 13:26:03.878: INFO: Created: latency-svc-bnn2v Mar 22 13:26:03.894: INFO: Got endpoints: latency-svc-bnn2v [676.02167ms] Mar 22 13:26:03.894: INFO: Latencies: [39.335249ms 70.420338ms 143.404986ms 153.396918ms 184.479907ms 214.565092ms 292.924339ms 322.474922ms 358.464542ms 388.69337ms 449.507184ms 479.635984ms 556.70611ms 598.093595ms 625.141772ms 630.710288ms 648.067249ms 652.097776ms 658.721784ms 662.666839ms 664.47842ms 664.768252ms 666.223114ms 676.02167ms 676.076519ms 676.190613ms 676.192529ms 676.286868ms 676.7727ms 678.218612ms 682.097161ms 682.124446ms 682.885928ms 687.677028ms 687.690471ms 688.855012ms 689.051542ms 690.866921ms 692.950281ms 694.0438ms 696.513658ms 699.06723ms 700.01603ms 700.329984ms 700.406123ms 702.233869ms 703.653779ms 705.388349ms 705.622262ms 705.717258ms 705.953246ms 706.02281ms 708.174504ms 709.498768ms 710.16677ms 710.369763ms 711.720644ms 711.779389ms 711.92495ms 716.100944ms 716.200654ms 717.591781ms 717.760172ms 717.982485ms 718.369123ms 718.433989ms 718.484519ms 718.634331ms 719.319182ms 721.098467ms 721.478671ms 721.902431ms 722.006263ms 722.515862ms 723.517323ms 723.618084ms 723.700056ms 723.744376ms 723.954165ms 724.505292ms 724.752943ms 725.852064ms 727.090191ms 728.713016ms 729.557082ms 729.870766ms 730.025164ms 730.052796ms 730.126094ms 730.257155ms 730.460546ms 732.167386ms 732.313153ms 733.128549ms 733.13274ms 734.796752ms 734.806989ms 735.777065ms 736.012645ms 736.037736ms 736.157675ms 736.166981ms 737.374751ms 737.621475ms 738.589851ms 741.69467ms 741.871633ms 741.91975ms 742.048739ms 742.278492ms 742.472342ms 743.131525ms 745.76872ms 745.904349ms 745.994473ms 746.180742ms 746.975817ms 747.817742ms 748.076127ms 748.242261ms 748.260435ms 748.478309ms 749.514116ms 749.709041ms 751.062359ms 751.331303ms 751.525489ms 753.018298ms 754.289708ms 755.616701ms 755.647818ms 755.959667ms 757.911992ms 758.131141ms 758.722195ms 759.397284ms 759.54856ms 759.584423ms 759.789649ms 760.104483ms 760.262094ms 760.775928ms 762.527542ms 763.279234ms 763.38748ms 765.86838ms 766.564132ms 768.517764ms 770.654319ms 771.768694ms 773.036421ms 774.225824ms 777.110674ms 781.543122ms 783.786689ms 784.262336ms 784.322833ms 785.97334ms 789.091295ms 789.754082ms 790.011614ms 790.194614ms 790.881329ms 794.553026ms 796.29048ms 796.458928ms 796.843927ms 796.95623ms 797.193582ms 801.713778ms 804.740796ms 807.714232ms 809.248388ms 812.593605ms 814.286798ms 814.44386ms 814.67296ms 818.660127ms 819.90648ms 821.905468ms 825.699516ms 831.411342ms 831.503565ms 833.769335ms 833.836693ms 837.362733ms 839.368687ms 840.644645ms 840.662079ms 841.66509ms 841.7735ms 846.042003ms 855.795946ms 855.961725ms 857.321875ms 860.545527ms 863.041648ms 863.25742ms 863.335569ms 881.242204ms] Mar 22 13:26:03.894: INFO: 50 %ile: 736.157675ms Mar 22 13:26:03.894: INFO: 90 %ile: 825.699516ms Mar 22 13:26:03.894: INFO: 99 %ile: 863.335569ms Mar 22 
13:26:03.894: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:26:03.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5188" for this suite. Mar 22 13:26:25.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:26:26.082: INFO: namespace svc-latency-5188 deletion completed in 22.113818349s • [SLOW TEST:35.360 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:26:26.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-34195b21-b964-439f-99de-79c66eda9b89 STEP: Creating secret with name secret-projected-all-test-volume-ef900c7a-4ef7-49f3-9bb0-796985fbf21d STEP: Creating a pod to test Check all projections for projected volume plugin Mar 22 13:26:26.169: INFO: Waiting up to 5m0s for pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904" in namespace "projected-3083" to be "success or failure" Mar 22 13:26:26.178: INFO: Pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904": Phase="Pending", Reason="", readiness=false. Elapsed: 9.342247ms Mar 22 13:26:28.202: INFO: Pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033064017s Mar 22 13:26:30.221: INFO: Pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904": Phase="Running", Reason="", readiness=true. Elapsed: 4.051639551s Mar 22 13:26:32.224: INFO: Pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.055087242s
STEP: Saw pod success
Mar 22 13:26:32.224: INFO: Pod "projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904" satisfied condition "success or failure"
Mar 22 13:26:32.227: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904 container projected-all-volume-test:
STEP: delete the pod
Mar 22 13:26:32.245: INFO: Waiting for pod projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904 to disappear
Mar 22 13:26:32.274: INFO: Pod projected-volume-3c3d76bc-a885-47ce-a94f-3d7ec84b5904 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:26:32.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3083" for this suite.
Mar 22 13:26:38.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:26:38.370: INFO: namespace projected-3083 deletion completed in 6.092120819s
• [SLOW TEST:12.286 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
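A projected volume merges several sources under a single mount, which is what "all components that make up the projection API" refers to. A sketch of the three-source volume this spec exercises; the source names are shortened from the log's generated ones:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(vol) // all three sources appear as files under one mount
}
```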
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 13:26:38.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Mar 22 13:26:38.411: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8725" to be "success or failure"
Mar 22 13:26:38.429: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.172947ms
Mar 22 13:26:40.448: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036185174s
Mar 22 13:26:42.452: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040390068s
Mar 22 13:26:44.456: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04510775s
STEP: Saw pod success
Mar 22 13:26:44.457: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 22 13:26:44.460: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 22 13:26:44.479: INFO: Waiting for pod pod-host-path-test to disappear
Mar 22 13:26:44.483: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 13:26:44.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8725" for this suite.
Mar 22 13:26:50.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 13:26:50.576: INFO: namespace hostpath-8725 deletion completed in 6.089014473s
• [SLOW TEST:12.205 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
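pod-host-path-test mounts a hostPath volume and asserts on the mode the kubelet gives the mount point. A sketch of the volume half; the node path and type are illustrative:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hpType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/host-path-test", // directory on the node, created if absent
				Type: &hpType,
			},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(vol) // mount it with a VolumeMount and `stat` the mountpoint to check the mode
}
```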
Elapsed: 4.054204663s STEP: Saw pod success Mar 22 13:26:54.726: INFO: Pod "pod-projected-secrets-84e2d279-401b-4594-8b22-f5179aa034d6" satisfied condition "success or failure" Mar 22 13:26:54.728: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-84e2d279-401b-4594-8b22-f5179aa034d6 container projected-secret-volume-test: STEP: delete the pod Mar 22 13:26:54.749: INFO: Waiting for pod pod-projected-secrets-84e2d279-401b-4594-8b22-f5179aa034d6 to disappear Mar 22 13:26:54.753: INFO: Pod pod-projected-secrets-84e2d279-401b-4594-8b22-f5179aa034d6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:26:54.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3363" for this suite. Mar 22 13:27:00.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:27:00.851: INFO: namespace projected-3363 deletion completed in 6.094861469s • [SLOW TEST:10.274 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:27:00.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 22 13:27:00.909: INFO: Waiting up to 5m0s for pod "client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d" in namespace "containers-8695" to be "success or failure" Mar 22 13:27:00.912: INFO: Pod "client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.54627ms Mar 22 13:27:02.916: INFO: Pod "client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007697024s Mar 22 13:27:04.922: INFO: Pod "client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01392088s STEP: Saw pod success Mar 22 13:27:04.923: INFO: Pod "client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d" satisfied condition "success or failure" Mar 22 13:27:04.926: INFO: Trying to get logs from node iruya-worker pod client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d container test-container: STEP: delete the pod Mar 22 13:27:04.957: INFO: Waiting for pod client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d to disappear Mar 22 13:27:04.966: INFO: Pod client-containers-7d36f6cc-0636-4a7b-83b3-7650835e6d1d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:27:04.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8695" for this suite. Mar 22 13:27:10.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:27:11.118: INFO: namespace containers-8695 deletion completed in 6.148226678s • [SLOW TEST:10.267 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:27:11.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
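
The [It] block that follows creates a pod whose container carries a preStop exec hook; the kubelet runs that hook before stopping the container, which is why the pod lingers through the long "still exists" tail below. A minimal sketch of such a pod against the v1.15-era k8s.io/api types (the pod and container names follow the log; the image and hook command are illustrative assumptions, and the real e2e test posts to the handler pod created above rather than echoing):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch of a pod with a preStop exec hook. In the v1.15 API the
	// handler type is corev1.Handler (renamed LifecycleHandler in
	// later releases).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "docker.io/library/nginx:1.14-alpine", // illustrative; not the test's image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Illustrative command standing in for the
							// framework's hook payload.
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
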
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 22 13:27:19.224: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:19.239: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:21.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:21.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:23.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:23.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:25.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:25.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:27.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:27.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:29.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:29.251: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:31.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:31.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:33.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:33.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:35.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:35.245: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:37.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:37.244: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:39.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:39.287: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:41.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:41.243: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 13:27:43.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 13:27:43.269: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:27:43.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3960" for this suite. 
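
The thirteen "still exists" checks above are a fixed 2-second polling loop that ends once the Get returns NotFound. A sketch of that kind of wait, assuming the v1.15-era client-go signatures (pod Get without a context argument) and reusing the kubeconfig path, namespace, and pod name printed in this run; waitForPodGone is a made-up helper name:

package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodGone polls every 2s, matching the cadence visible in the
// log, until the pod is NotFound or the timeout expires.
func waitForPodGone(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("Pod %s no longer exists\n", name)
			return true, nil
		}
		if err != nil {
			return false, err // unexpected API error: give up
		}
		fmt.Printf("Pod %s still exists\n", name)
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and pod name taken from the log entries above.
	if err := waitForPodGone(client, "container-lifecycle-hook-3960", "pod-with-prestop-exec-hook", 5*time.Minute); err != nil {
		panic(err)
	}
}
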
Mar 22 13:28:05.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:28:05.398: INFO: namespace container-lifecycle-hook-3960 deletion completed in 22.119540126s • [SLOW TEST:54.279 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:28:05.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 22 13:28:05.551: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:05.554: INFO: Number of nodes with available pods: 0 Mar 22 13:28:05.554: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:06.559: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:06.561: INFO: Number of nodes with available pods: 0 Mar 22 13:28:06.561: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:07.712: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:07.715: INFO: Number of nodes with available pods: 0 Mar 22 13:28:07.715: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:08.560: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:08.564: INFO: Number of nodes with available pods: 0 Mar 22 13:28:08.564: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:09.563: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:09.565: INFO: Number of nodes with available pods: 2 Mar 22 13:28:09.565: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
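
For the taint messages that follow: the test's DaemonSet pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so iruya-control-plane is skipped and only the two worker nodes are counted. A rough Go sketch of such a DaemonSet (the object name "daemon-set" is from the log; the label key, container name, and image are assumptions, the image being one seen elsewhere in this run):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// hence the "skip checking this node" lines for the
					// control-plane node.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ds, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
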
Mar 22 13:28:09.579: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:09.582: INFO: Number of nodes with available pods: 1 Mar 22 13:28:09.583: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:10.589: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:10.591: INFO: Number of nodes with available pods: 1 Mar 22 13:28:10.591: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:11.588: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:11.591: INFO: Number of nodes with available pods: 1 Mar 22 13:28:11.591: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:12.588: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:12.592: INFO: Number of nodes with available pods: 1 Mar 22 13:28:12.592: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:14.588: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:14.600: INFO: Number of nodes with available pods: 1 Mar 22 13:28:14.600: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:15.587: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:15.591: INFO: Number of nodes with available pods: 1 Mar 22 13:28:15.591: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:16.588: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:16.592: INFO: Number of nodes with available pods: 1 Mar 22 13:28:16.592: INFO: Node iruya-worker is running more than one daemon pod Mar 22 13:28:17.587: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 13:28:17.591: INFO: Number of nodes with available pods: 2 Mar 22 13:28:17.591: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2091, will wait for the garbage collector to delete the pods Mar 22 13:28:17.654: INFO: Deleting DaemonSet.extensions daemon-set took: 6.6408ms Mar 22 13:28:17.955: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265078ms Mar 22 13:28:20.859: INFO: Number of nodes with available pods: 0 Mar 22 13:28:20.859: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 13:28:20.865: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2091/daemonsets","resourceVersion":"1240574"},"items":null} Mar 22 13:28:20.868: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2091/pods","resourceVersion":"1240574"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:28:20.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2091" for this suite. Mar 22 13:28:26.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:28:26.970: INFO: namespace daemonsets-2091 deletion completed in 6.090158173s • [SLOW TEST:21.572 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:28:26.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 22 13:28:27.032: INFO: Waiting up to 5m0s for pod "client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3" in namespace "containers-9644" to be "success or failure" Mar 22 13:28:27.054: INFO: Pod "client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.778497ms Mar 22 13:28:29.060: INFO: Pod "client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027782756s Mar 22 13:28:31.064: INFO: Pod "client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032423496s STEP: Saw pod success Mar 22 13:28:31.064: INFO: Pod "client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3" satisfied condition "success or failure" Mar 22 13:28:31.067: INFO: Trying to get logs from node iruya-worker pod client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3 container test-container: STEP: delete the pod Mar 22 13:28:31.084: INFO: Waiting for pod client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3 to disappear Mar 22 13:28:31.088: INFO: Pod client-containers-e7728a2a-a4e6-483d-8d91-df22adc7a7f3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:28:31.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9644" for this suite. 
Mar 22 13:28:37.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:28:37.178: INFO: namespace containers-9644 deletion completed in 6.086709146s • [SLOW TEST:10.207 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:28:37.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:28:37.277: INFO: Creating deployment "nginx-deployment" Mar 22 13:28:37.281: INFO: Waiting for observed generation 1 Mar 22 13:28:39.432: INFO: Waiting for all required pods to come up Mar 22 13:28:39.438: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 22 13:28:49.446: INFO: Waiting for deployment "nginx-deployment" to complete Mar 22 13:28:49.453: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 22 13:28:49.460: INFO: Updating deployment nginx-deployment Mar 22 13:28:49.460: INFO: Waiting for observed generation 2 Mar 22 13:28:51.470: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 22 13:28:51.473: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 22 13:28:51.475: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 22 13:28:51.482: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 22 13:28:51.482: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 22 13:28:51.484: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 22 13:28:51.490: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 22 13:28:51.490: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 22 13:28:51.496: INFO: Updating deployment nginx-deployment Mar 22 13:28:51.496: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 22 13:28:51.504: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 22 13:28:51.563: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 22 13:28:51.727: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7969,SelfLink:/apis/apps/v1/namespaces/deployment-7969/deployments/nginx-deployment,UID:ab690924-e505-4b29-9d45-cae6036f45ed,ResourceVersion:1240867,Generation:3,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-22 13:28:50 +0000 UTC 2020-03-22 13:28:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-22 13:28:51 +0000 UTC 2020-03-22 13:28:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 22 13:28:51.791: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7969,SelfLink:/apis/apps/v1/namespaces/deployment-7969/replicasets/nginx-deployment-55fb7cb77f,UID:f698b4ea-a76e-44c5-8d72-811c80aec143,ResourceVersion:1240910,Generation:3,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ab690924-e505-4b29-9d45-cae6036f45ed 0xc003361f07 0xc003361f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 13:28:51.791: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 22 13:28:51.791: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7969,SelfLink:/apis/apps/v1/namespaces/deployment-7969/replicasets/nginx-deployment-7b8c6f4498,UID:559bbcff-b26e-4e6b-a629-b2ae7ec4aca7,ResourceVersion:1240908,Generation:3,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ab690924-e505-4b29-9d45-cae6036f45ed 0xc003361fd7 0xc003361fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 22 13:28:51.940: INFO: Pod "nginx-deployment-55fb7cb77f-6mxn4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6mxn4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-6mxn4,UID:5262c5d8-d907-4913-bbe6-fcd23084ecd3,ResourceVersion:1240850,Generation:0,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494557 0xc001494558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014945d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014945f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-22 13:28:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.941: INFO: Pod "nginx-deployment-55fb7cb77f-6rjmf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6rjmf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-6rjmf,UID:dde1950f-8c14-4ee1-a262-4221ebdf644d,ResourceVersion:1240817,Generation:0,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc0014946c0 0xc0014946c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-22 13:28:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.941: INFO: Pod "nginx-deployment-55fb7cb77f-6trbh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6trbh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-6trbh,UID:33f3e08a-5a9c-40d6-9477-038ed03758ee,ResourceVersion:1240898,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494830 0xc001494831}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014948b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014948d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.941: INFO: Pod "nginx-deployment-55fb7cb77f-b2mjs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b2mjs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-b2mjs,UID:80784a91-e4a4-4a6e-896b-5b8169ac6056,ResourceVersion:1240849,Generation:0,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494957 0xc001494958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014949d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014949f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-22 13:28:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.941: INFO: Pod "nginx-deployment-55fb7cb77f-b5w2q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b5w2q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-b5w2q,UID:18b22828-5eb8-4afd-8f62-71fce4bb60dd,ResourceVersion:1240877,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494ac0 0xc001494ac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.942: INFO: Pod "nginx-deployment-55fb7cb77f-c8zpq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c8zpq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-c8zpq,UID:98149cf5-80f5-4568-8a28-0d75858a57d6,ResourceVersion:1240896,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494be7 0xc001494be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.942: INFO: Pod "nginx-deployment-55fb7cb77f-ctv67" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ctv67,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-ctv67,UID:a42f0914-7cb9-47af-9ed2-a675d1426f77,ResourceVersion:1240879,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494d07 0xc001494d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.942: INFO: Pod "nginx-deployment-55fb7cb77f-fbpxs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fbpxs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-fbpxs,UID:cc2bae29-4ed8-43e7-af73-86c0f795d17a,ResourceVersion:1240870,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494e27 0xc001494e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.942: INFO: Pod "nginx-deployment-55fb7cb77f-j5nxj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j5nxj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-j5nxj,UID:563b1a09-6e9d-47ef-9002-6b5d90cd6d01,ResourceVersion:1240894,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001494f47 0xc001494f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001494fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001494fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.942: INFO: Pod "nginx-deployment-55fb7cb77f-l4kml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l4kml,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-l4kml,UID:20a94b02-d6fd-4593-9d99-fbc014b750f4,ResourceVersion:1240843,Generation:0,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001495067 0xc001495068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014950e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-22 13:28:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.943: INFO: Pod "nginx-deployment-55fb7cb77f-lj8dc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lj8dc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-lj8dc,UID:a7a7955c-d8b9-4238-bc0b-1073fce05c79,ResourceVersion:1240829,Generation:0,CreationTimestamp:2020-03-22 13:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc0014951d0 0xc0014951d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495250} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-22 13:28:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.943: INFO: Pod "nginx-deployment-55fb7cb77f-tfr5r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tfr5r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-tfr5r,UID:28ffedcb-ef24-4aae-86db-a0b9a35ed5a5,ResourceVersion:1240897,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001495340 0xc001495341}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014953c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014953e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.943: INFO: Pod "nginx-deployment-55fb7cb77f-vfxhc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vfxhc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-55fb7cb77f-vfxhc,UID:8ce9f4ab-186e-4fa2-a2ed-155e4f257bd6,ResourceVersion:1240901,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f698b4ea-a76e-44c5-8d72-811c80aec143 0xc001495467 0xc001495468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014954e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.943: INFO: Pod "nginx-deployment-7b8c6f4498-25mrd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-25mrd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-25mrd,UID:634f34c8-b4cf-4fbf-87fe-d8d68d148c2d,ResourceVersion:1240772,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495587 0xc001495588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495600} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001495620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.212,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ae66ad8181a7e5db8b2cc788b3d081ce6586d88606918b957a13a6e273277819}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.943: INFO: Pod "nginx-deployment-7b8c6f4498-4j77t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4j77t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-4j77t,UID:69fc8d49-2b03-4378-bd9a-879d2ee960a5,ResourceVersion:1240746,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc0014956f7 0xc0014956f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495770} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.208,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fc1de0bcaa9c56cb76513bdbc764df50a02daa520d28fcdf479a40d06177ef05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.944: INFO: Pod "nginx-deployment-7b8c6f4498-5bhhl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5bhhl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-5bhhl,UID:c686c5ab-bc2d-4309-9a24-a30a264131b9,ResourceVersion:1240906,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495877 0xc001495878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014958f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.944: INFO: Pod "nginx-deployment-7b8c6f4498-6qsvl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6qsvl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-6qsvl,UID:9acdf523-36a8-4c3b-962a-9ffeec8e8af9,ResourceVersion:1240774,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495997 0xc001495998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.210,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3094bb175c2a5eceb509f33d504b8c7262584f71409e11a19d9840ed180ba6ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.944: INFO: Pod "nginx-deployment-7b8c6f4498-7m2k2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7m2k2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-7m2k2,UID:4b31ae3a-970a-45f9-a57a-ab912194f3e0,ResourceVersion:1240755,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495b17 0xc001495b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.209,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://be093b5c4fa0f7e1d6451ee057f607b2614aeb551a40548e60002a48da19bfe2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.944: INFO: Pod "nginx-deployment-7b8c6f4498-7thsg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7thsg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-7thsg,UID:930b83b0-bc26-4008-b17b-e75fcd06bd69,ResourceVersion:1240904,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495c87 0xc001495c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.944: INFO: Pod "nginx-deployment-7b8c6f4498-7w6k2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7w6k2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-7w6k2,UID:4629f206-6def-4fc5-98d3-700dabcc3440,ResourceVersion:1240900,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495da7 0xc001495da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-22 13:28:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.945: INFO: Pod "nginx-deployment-7b8c6f4498-c82g9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c82g9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-c82g9,UID:edea260e-28a7-4e6b-a3a2-b55a274f8212,ResourceVersion:1240905,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc001495f07 0xc001495f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001495f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001495fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.945: INFO: Pod "nginx-deployment-7b8c6f4498-fd992" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fd992,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-fd992,UID:22381386-bb04-4d17-af22-195d1f0387fd,ResourceVersion:1240903,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380027 0xc003380028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033800a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0033800c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.945: INFO: Pod "nginx-deployment-7b8c6f4498-grfj2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-grfj2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-grfj2,UID:6d2b1e5d-2dfd-4405-9d78-12084b88944a,ResourceVersion:1240885,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380147 0xc003380148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033801c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033801e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.945: INFO: Pod "nginx-deployment-7b8c6f4498-h2knv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h2knv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-h2knv,UID:3a0319aa-903b-486f-94ae-256ab2b73470,ResourceVersion:1240888,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380277 0xc003380278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033802f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.946: INFO: Pod "nginx-deployment-7b8c6f4498-jtzd9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jtzd9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-jtzd9,UID:c0b0d1c4-5547-4591-b784-70b0d148bb8b,ResourceVersion:1240782,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380397 0xc003380398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380410} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.64,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0a320b046f034a69cd47196a43d2f22614ef1e50671da3426fe35f781aad8624}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.946: INFO: Pod "nginx-deployment-7b8c6f4498-jw288" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jw288,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-jw288,UID:56e5e671-99a2-4670-b5ba-f031527da40f,ResourceVersion:1240750,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380507 0xc003380508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380580} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033805a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.61,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f4b56d63eaf86756ebc338057356663dd4dfae7d20a06fe38d3b649264954baf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.946: INFO: Pod "nginx-deployment-7b8c6f4498-kmcsh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kmcsh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-kmcsh,UID:83ba22f6-89e9-4b68-b584-c959f8be88cf,ResourceVersion:1240788,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380677 0xc003380678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033806f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.63,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2a4d948f41acd7d2ed042ac40fc23dd13266fa9e6de5b72db20274fa8f73473a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.946: INFO: Pod "nginx-deployment-7b8c6f4498-n4cwt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n4cwt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-n4cwt,UID:77a64351-8f90-42ee-a14a-6134dbe43772,ResourceVersion:1240911,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc0033807e7 0xc0033807e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380860} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-22 13:28:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.946: INFO: Pod "nginx-deployment-7b8c6f4498-rb7f4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rb7f4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-rb7f4,UID:877a4cda-8692-4f47-8068-a6509499c48d,ResourceVersion:1240907,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380947 0xc003380948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033809c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033809e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.947: INFO: Pod "nginx-deployment-7b8c6f4498-rlf7d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rlf7d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-rlf7d,UID:09eba8a2-5a48-4d6b-aaca-c61f80b7b6c2,ResourceVersion:1240747,Generation:0,CreationTimestamp:2020-03-22 13:28:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380a67 0xc003380a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380ae0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003380b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.60,StartTime:2020-03-22 13:28:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-22 13:28:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e72b235c82e19c0cc432395f076e76685c76e8de780a707ec5dee76ea66f70ff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.947: INFO: Pod "nginx-deployment-7b8c6f4498-sz774" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sz774,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-sz774,UID:b918af27-05a8-4295-bef1-a15a7b1b1167,ResourceVersion:1240884,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380bd7 0xc003380bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.947: INFO: Pod "nginx-deployment-7b8c6f4498-zgh6d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zgh6d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-zgh6d,UID:1969a8ce-d2a9-425e-8b96-51564bce2a4f,ResourceVersion:1240916,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380cf7 0xc003380cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-22 13:28:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 22 13:28:51.947: INFO: Pod "nginx-deployment-7b8c6f4498-zvk8w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zvk8w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7969,SelfLink:/api/v1/namespaces/deployment-7969/pods/nginx-deployment-7b8c6f4498-zvk8w,UID:768835ff-d715-43a3-b6dc-117da2e0f4ec,ResourceVersion:1240889,Generation:0,CreationTimestamp:2020-03-22 13:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 559bbcff-b26e-4e6b-a629-b2ae7ec4aca7 0xc003380e57 0xc003380e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gh7ht {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gh7ht,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gh7ht true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003380ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003380ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:28:51 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:28:51.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7969" for this suite. 
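For reference, the "is available" / "is not available" verdicts in the pod dumps above reduce to the pod's Ready condition: with the deployment's default minReadySeconds of 0, a pod counts as available as soon as Ready is True. A minimal client-go sketch of that check; it assumes a current client-go (v0.18+, where calls take a context), and the namespace and label selector are taken from this run purely for illustration:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True; with
    // minReadySeconds=0 this is what makes the pod count as "available".
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("deployment-7969").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=nginx"})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            p := &pods.Items[i]
            fmt.Printf("%s phase=%s available=%v\n", p.Name, p.Status.Phase, podReady(p))
        }
    }

In the dumps above this is exactly the split: pods whose Ready condition is True are logged as available, the ones still in ContainerCreating or not yet scheduled are not.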
Mar 22 13:29:10.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:29:10.204: INFO: namespace deployment-7969 deletion completed in 18.19525438s • [SLOW TEST:33.026 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:29:10.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-14d92699-8cae-47ca-a684-975f5afa367a STEP: Creating a pod to test consume secrets Mar 22 13:29:10.351: INFO: Waiting up to 5m0s for pod "pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7" in namespace "secrets-4029" to be "success or failure" Mar 22 13:29:10.355: INFO: Pod "pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04764ms Mar 22 13:29:12.359: INFO: Pod "pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007889031s Mar 22 13:29:14.363: INFO: Pod "pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012008612s STEP: Saw pod success Mar 22 13:29:14.363: INFO: Pod "pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7" satisfied condition "success or failure" Mar 22 13:29:14.365: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7 container secret-volume-test: STEP: delete the pod Mar 22 13:29:14.386: INFO: Waiting for pod pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7 to disappear Mar 22 13:29:14.390: INFO: Pod pod-secrets-1e8b7aed-d080-450c-902a-2d8f175e40a7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:29:14.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4029" for this suite. Mar 22 13:29:20.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:29:20.515: INFO: namespace secrets-4029 deletion completed in 6.12077202s STEP: Destroying namespace "secret-namespace-5124" for this suite. 
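The point of the secrets test above is that a volume's SecretName is resolved in the pod's own namespace only, so an identically named secret created in another namespace (secret-namespace-5124 here) cannot leak into the mount. A minimal sketch of such a pod, assuming a current client-go; the secret name, key path, namespace, and image are illustrative, not the exact objects the suite created:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        // SecretName is looked up in the pod's own namespace only.
                        Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }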
Mar 22 13:29:26.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:29:26.606: INFO: namespace secret-namespace-5124 deletion completed in 6.091715468s • [SLOW TEST:16.402 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:29:26.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 22 13:29:30.708: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e22f891d-da33-419d-96b1-31dadc1115bd,GenerateName:,Namespace:events-8325,SelfLink:/api/v1/namespaces/events-8325/pods/send-events-e22f891d-da33-419d-96b1-31dadc1115bd,UID:7e83a6de-b6e8-444f-9417-0d2d0161cf5b,ResourceVersion:1241248,Generation:0,CreationTimestamp:2020-03-22 13:29:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 668960938,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m6vrb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m6vrb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-m6vrb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00350c550} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00350c570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:29:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:29:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:29:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 13:29:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.226,StartTime:2020-03-22 13:29:26 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-22 13:29:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://a19117999a985391316cc25b9c881177be68edc21d18073864502034d66fee6d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 22 13:29:32.712: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 22 13:29:34.717: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:29:34.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8325" for this suite. Mar 22 13:30:18.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:30:18.981: INFO: namespace events-8325 deletion completed in 44.254288354s • [SLOW TEST:52.374 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:30:18.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:30:23.123: INFO: Waiting up to 5m0s for pod "client-envvars-b548b3d2-61e2-4512-896c-05555dd64451" in namespace "pods-8503" to be "success or failure" Mar 22 13:30:23.143: INFO: Pod "client-envvars-b548b3d2-61e2-4512-896c-05555dd64451": Phase="Pending", Reason="", readiness=false. Elapsed: 19.286089ms Mar 22 13:30:25.146: INFO: Pod "client-envvars-b548b3d2-61e2-4512-896c-05555dd64451": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023097602s Mar 22 13:30:27.150: INFO: Pod "client-envvars-b548b3d2-61e2-4512-896c-05555dd64451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027166842s STEP: Saw pod success Mar 22 13:30:27.150: INFO: Pod "client-envvars-b548b3d2-61e2-4512-896c-05555dd64451" satisfied condition "success or failure" Mar 22 13:30:27.154: INFO: Trying to get logs from node iruya-worker pod client-envvars-b548b3d2-61e2-4512-896c-05555dd64451 container env3cont: STEP: delete the pod Mar 22 13:30:27.171: INFO: Waiting for pod client-envvars-b548b3d2-61e2-4512-896c-05555dd64451 to disappear Mar 22 13:30:27.176: INFO: Pod client-envvars-b548b3d2-61e2-4512-896c-05555dd64451 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:30:27.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8503" for this suite. Mar 22 13:31:17.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:31:17.263: INFO: namespace pods-8503 deletion completed in 50.0842991s • [SLOW TEST:58.281 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:31:17.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9d39475b-f729-4766-b1fc-a2bd06d68dbb STEP: Creating a pod to test consume configMaps Mar 22 13:31:17.346: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9" in namespace "projected-9549" to be "success or failure" Mar 22 13:31:17.356: INFO: Pod "pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.81332ms Mar 22 13:31:19.360: INFO: Pod "pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014405426s Mar 22 13:31:21.365: INFO: Pod "pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019215335s STEP: Saw pod success Mar 22 13:31:21.365: INFO: Pod "pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9" satisfied condition "success or failure" Mar 22 13:31:21.368: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9 container projected-configmap-volume-test: STEP: delete the pod Mar 22 13:31:21.382: INFO: Waiting for pod pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9 to disappear Mar 22 13:31:21.386: INFO: Pod pod-projected-configmaps-fa1ad5fd-1fce-43e2-8e8c-75adafffa8b9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:31:21.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9549" for this suite. Mar 22 13:31:27.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:31:27.494: INFO: namespace projected-9549 deletion completed in 6.104274386s • [SLOW TEST:10.230 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:31:27.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3974 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3974 STEP: Creating statefulset with conflicting port in namespace statefulset-3974 STEP: Waiting until pod test-pod starts running in namespace statefulset-3974 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3974 Mar 22 13:31:31.651: INFO: Observed stateful pod in namespace: statefulset-3974, name: ss-0, uid: 3bd4cead-6bf7-437c-86c2-cbb734e279a5, status phase: Pending. Waiting for statefulset controller to delete. Mar 22 13:31:32.194: INFO: Observed stateful pod in namespace: statefulset-3974, name: ss-0, uid: 3bd4cead-6bf7-437c-86c2-cbb734e279a5, status phase: Failed. Waiting for statefulset controller to delete.
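What forces the Failed phase here is a deliberate hostPort collision: a bare pod is pinned to a node (spec.nodeName set, bypassing the scheduler) and holds a host port, and ss-0 is pinned to the same node and port, so the kubelet rejects it at admission and the StatefulSet controller keeps deleting and recreating it until the bare pod is removed. A sketch of the conflicting bare pod, assuming a current client-go; the node name is taken from this run, while the port number and namespace are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // A bare pod squatting on a host port. A stateful pod bound to the same
        // node and hostPort cannot be admitted by the kubelet; it is marked
        // Failed, and its controller recreates it until this pod goes away.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                NodeName: "iruya-worker", // skip the scheduler, land on a fixed node
                Containers: []corev1.Container{{
                    Name:  "webserver",
                    Image: "docker.io/library/nginx:1.14-alpine",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 80,
                        HostPort:      21017, // the conflicting port (value illustrative)
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("statefulset-3974").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }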
Mar 22 13:31:32.200: INFO: Observed stateful pod in namespace: statefulset-3974, name: ss-0, uid: 3bd4cead-6bf7-437c-86c2-cbb734e279a5, status phase: Failed. Waiting for statefulset controller to delete. Mar 22 13:31:32.213: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3974 STEP: Removing pod with conflicting port in namespace statefulset-3974 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3974 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 22 13:31:46.320: INFO: Deleting all statefulsets in ns statefulset-3974 Mar 22 13:31:46.323: INFO: Scaling statefulset ss to 0 Mar 22 13:31:56.345: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 13:31:56.348: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:31:56.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3974" for this suite. Mar 22 13:32:02.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:32:02.489: INFO: namespace statefulset-3974 deletion completed in 6.128264263s • [SLOW TEST:34.994 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:32:02.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b in namespace container-probe-1824 Mar 22 13:32:06.572: INFO: Started pod liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b in namespace container-probe-1824 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 13:32:06.575: INFO: Initial restart count of pod liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is 0 Mar 22 13:32:24.620: INFO: Restart count of pod container-probe-1824/liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is now 1 (18.045061715s elapsed) Mar 22 13:32:44.726: INFO: Restart count of pod container-probe-1824/liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is now 2 (38.150557528s elapsed) Mar 22
13:33:04.768: INFO: Restart count of pod container-probe-1824/liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is now 3 (58.192644702s elapsed) Mar 22 13:33:24.824: INFO: Restart count of pod container-probe-1824/liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is now 4 (1m18.248934405s elapsed) Mar 22 13:34:27.008: INFO: Restart count of pod container-probe-1824/liveness-9eacadfa-d839-4bb1-93e4-9949a21dc37b is now 5 (2m20.432828038s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:34:27.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1824" for this suite. Mar 22 13:34:33.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:34:33.136: INFO: namespace container-probe-1824 deletion completed in 6.104776143s • [SLOW TEST:150.647 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:34:33.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f9fd29e6-b6cd-402b-8b07-d4aa9804314d STEP: Creating a pod to test consume configMaps Mar 22 13:34:33.222: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11" in namespace "configmap-7036" to be "success or failure" Mar 22 13:34:33.240: INFO: Pod "pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11": Phase="Pending", Reason="", readiness=false. Elapsed: 17.794576ms Mar 22 13:34:35.248: INFO: Pod "pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025804414s Mar 22 13:34:37.252: INFO: Pod "pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029820478s STEP: Saw pod success Mar 22 13:34:37.252: INFO: Pod "pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11" satisfied condition "success or failure" Mar 22 13:34:37.255: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11 container configmap-volume-test: STEP: delete the pod Mar 22 13:34:37.298: INFO: Waiting for pod pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11 to disappear Mar 22 13:34:37.300: INFO: Pod pod-configmaps-5d1ca8ee-ad85-4980-82ef-0ec0e0863c11 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:34:37.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7036" for this suite. Mar 22 13:34:43.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:34:43.438: INFO: namespace configmap-7036 deletion completed in 6.134516887s • [SLOW TEST:10.301 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:34:43.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 13:34:43.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3219' Mar 22 13:34:46.891: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 22 13:34:46.891: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 22 13:34:50.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3219' Mar 22 13:34:51.011: INFO: stderr: "" Mar 22 13:34:51.011: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:34:51.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3219" for this suite. Mar 22 13:35:13.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:35:13.101: INFO: namespace kubectl-3219 deletion completed in 22.086493529s • [SLOW TEST:29.662 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:35:13.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 22 13:35:17.729: INFO: Successfully updated pod "annotationupdate317fae25-ae13-4e73-9650-886eedea0989" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:35:19.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9454" for this suite. 
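"Successfully updated pod" in the downward-API test above means the suite changed the pod's annotations and then saw the mounted file change, because the kubelet periodically rewrites downward-API projections on its sync interval. A sketch of the volume shape involved; the field names are the real core/v1 API, while the volume name and file path are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected downward-API volume that surfaces the pod's annotations
        // as a file; when the annotations are updated, the kubelet rewrites the
        // file, which is the change such a test polls for.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    FieldPath: "metadata.annotations",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }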
Mar 22 13:35:41.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:35:41.854: INFO: namespace projected-9454 deletion completed in 22.093999382s • [SLOW TEST:28.752 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:35:41.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 22 13:35:41.905: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 13:35:41.921: INFO: Waiting for terminating namespaces to be deleted... Mar 22 13:35:41.923: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Mar 22 13:35:41.929: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) Mar 22 13:35:41.929: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 13:35:41.929: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) Mar 22 13:35:41.929: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 13:35:41.929: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Mar 22 13:35:41.955: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) Mar 22 13:35:41.955: INFO: Container coredns ready: true, restart count 0 Mar 22 13:35:41.955: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) Mar 22 13:35:41.955: INFO: Container coredns ready: true, restart count 0 Mar 22 13:35:41.955: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) Mar 22 13:35:41.955: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 13:35:41.955: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) Mar 22 13:35:41.955: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 22 13:35:42.035: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 22 13:35:42.035: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Mar
22 13:35:42.035: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 22 13:35:42.035: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 22 13:35:42.035: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 22 13:35:42.035: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49.15fea3bae9178957], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8047/filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49.15fea3bb39b887a1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49.15fea3bb7d4afe1f], Reason = [Created], Message = [Created container filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49] STEP: Considering event: Type = [Normal], Name = [filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49.15fea3bb90482cdc], Reason = [Started], Message = [Started container filler-pod-59ad0aca-f832-417c-983b-f510c4d35b49] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf.15fea3baec87e5f7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8047/filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf.15fea3bb65f406db], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf.15fea3bb9d214e77], Reason = [Created], Message = [Created container filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf.15fea3bbab1b6652], Reason = [Started], Message = [Started container filler-pod-5d727351-7226-4ab0-97db-9d2799d3b2bf] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fea3bc53677d1d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:35:49.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8047" for this suite. 
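The "Insufficient cpu" event above is pure arithmetic: the test sums the CPU requests already on each node (logged above), fills the remaining allocatable CPU with pause pods, then submits one more pod whose request cannot fit anywhere. A sketch of a pod carrying such a request, with the quantity illustrative; the image matches the filler pods in this run:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pause pod with an explicit CPU request. The scheduler admits it only
        // if the request fits in the node's allocatable CPU minus the requests
        // of the pods already there; otherwise it emits "Insufficient cpu".
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("500m"), // illustrative
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("500m"),
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Resources.Requests.Cpu())
    }

Note that only requests matter to this predicate; limits are irrelevant to scheduling, and pods with no CPU request at all (like the kube-proxy pods logged as cpu=0m) consume nothing from the scheduler's budget.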
Mar 22 13:35:55.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:35:55.336: INFO: namespace sched-pred-8047 deletion completed in 6.121116006s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.481 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:35:55.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 22 13:35:55.409: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:35:55.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8947" for this suite. 
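With -p 0 the proxy binds an ephemeral port and announces it on stdout, which is how the test learns the address to curl. A sketch of the same sequence in Go, assuming the announcement line still has the form "Starting to serve on 127.0.0.1:<port>" (the suite also passes --disable-filter, which is not needed just to hit /api/):

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        // -p 0 lets the proxy pick an ephemeral port; it prints the chosen
        // address on stdout before serving, so read that first line.
        cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "proxy", "-p", "0")
        out, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        defer cmd.Process.Kill()

        line, err := bufio.NewReader(out).ReadString('\n')
        if err != nil {
            panic(err)
        }
        addr := strings.TrimSpace(strings.TrimPrefix(line, "Starting to serve on "))

        resp, err := http.Get("http://" + addr + "/api/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s\n%s\n", resp.Status, body)
    }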
Mar 22 13:36:01.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:36:01.607: INFO: namespace kubectl-8947 deletion completed in 6.105082943s • [SLOW TEST:6.271 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:36:01.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:36:27.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8002" for this suite. Mar 22 13:36:33.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:36:33.871: INFO: namespace namespaces-8002 deletion completed in 6.079562306s STEP: Destroying namespace "nsdeletetest-2969" for this suite. Mar 22 13:36:33.872: INFO: Namespace nsdeletetest-2969 was already deleted STEP: Destroying namespace "nsdeletetest-5327" for this suite. 
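Namespace deletion in the test above is asynchronous: the namespace goes Terminating while its pods are killed and finalizers run, and only then disappears, which is why the teardowns in this log routinely take several seconds. A sketch of deleting a namespace and waiting for it to vanish, assuming a current client-go; the namespace name is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns := "nsdeletetest-demo" // illustrative

        if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        // The namespace sits in Terminating until everything inside it is gone,
        // then Get starts returning NotFound. Recreating a namespace with the
        // same name afterwards yields a fresh, empty namespace, which is what
        // the test verifies by listing pods in it.
        for {
            _, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                fmt.Println("namespace fully deleted")
                return
            }
            if err != nil {
                panic(err)
            }
            time.Sleep(2 * time.Second)
        }
    }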
Mar 22 13:36:39.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:36:39.961: INFO: namespace nsdeletetest-5327 deletion completed in 6.089139286s • [SLOW TEST:38.354 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:36:39.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-s4k99 in namespace proxy-9867 I0322 13:36:40.080982 6 runners.go:180] Created replication controller with name: proxy-service-s4k99, namespace: proxy-9867, replica count: 1 I0322 13:36:41.131638 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 13:36:42.131853 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 13:36:43.132052 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 13:36:44.132287 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0322 13:36:45.132521 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0322 13:36:46.132732 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0322 13:36:47.132945 6 runners.go:180] proxy-service-s4k99 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 13:36:47.136: INFO: setup took 7.095383758s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 22 13:36:47.142: INFO: (0) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.426376ms) Mar 22 13:36:47.143: INFO: (0) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.556ms) Mar 22 13:36:47.143: INFO: (0) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.617926ms) Mar 22 13:36:47.143: INFO: (0) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.534356ms) Mar 22 13:36:47.143: INFO: (0) 
/api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 6.564369ms) Mar 22 13:36:47.143: INFO: (0) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 6.702623ms) Mar 22 13:36:47.143: INFO: (0) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 6.876592ms) Mar 22 13:36:47.144: INFO: (0) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 7.511881ms) Mar 22 13:36:47.144: INFO: (0) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 8.01474ms) Mar 22 13:36:47.144: INFO: (0) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 8.34073ms) Mar 22 13:36:47.150: INFO: (0) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 13.665765ms) Mar 22 13:36:47.151: INFO: (0) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 14.795477ms) Mar 22 13:36:47.151: INFO: (0) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 4.820051ms) Mar 22 13:36:47.156: INFO: (1) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.968351ms) Mar 22 13:36:47.156: INFO: (1) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 5.013778ms) Mar 22 13:36:47.157: INFO: (1) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.412533ms) Mar 22 13:36:47.157: INFO: (1) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 5.335072ms) Mar 22 13:36:47.157: INFO: (1) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 5.384324ms) Mar 22 13:36:47.157: INFO: (1) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test<... (200; 6.884772ms) Mar 22 13:36:47.158: INFO: (1) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 6.888262ms) Mar 22 13:36:47.158: INFO: (1) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 6.866383ms) Mar 22 13:36:47.162: INFO: (2) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.659405ms) Mar 22 13:36:47.162: INFO: (2) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.809979ms) Mar 22 13:36:47.162: INFO: (2) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 3.942552ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 4.427853ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... 
(200; 4.463507ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 4.492742ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.402702ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.625833ms) Mar 22 13:36:47.163: INFO: (2) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 4.545611ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 5.344937ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 5.416809ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 5.37614ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.582338ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 5.547115ms) Mar 22 13:36:47.164: INFO: (2) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 5.527957ms) Mar 22 13:36:47.169: INFO: (3) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 5.215109ms) Mar 22 13:36:47.169: INFO: (3) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.296179ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.741172ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 5.906732ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 6.001467ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 6.072542ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 6.073583ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 6.045512ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 6.127738ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 6.213486ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.103725ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.175312ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 6.243635ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 6.186066ms) Mar 22 13:36:47.170: INFO: (3) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... 
(200; 6.435666ms) Mar 22 13:36:47.201: INFO: (4) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 30.736302ms) Mar 22 13:36:47.203: INFO: (4) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 32.387427ms) Mar 22 13:36:47.203: INFO: (4) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 32.708513ms) Mar 22 13:36:47.203: INFO: (4) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 32.666783ms) Mar 22 13:36:47.203: INFO: (4) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... (200; 33.198294ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 33.493834ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 33.744262ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 33.740986ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 33.758101ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 33.832093ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 33.770036ms) Mar 22 13:36:47.204: INFO: (4) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 33.805361ms) Mar 22 13:36:47.205: INFO: (4) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 33.993411ms) Mar 22 13:36:47.209: INFO: (5) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 6.007869ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.673951ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 5.019563ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 4.796738ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 6.38513ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 5.754978ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 6.142812ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.920478ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.021969ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.538047ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.048709ms) Mar 22 13:36:47.211: INFO: (5) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... 
(200; 5.630989ms) Mar 22 13:36:47.213: INFO: (6) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 2.141371ms) Mar 22 13:36:47.216: INFO: (6) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.976652ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.37217ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 5.798073ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 5.689561ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.710982ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 5.869967ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.757199ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.183451ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 6.104782ms) Mar 22 13:36:47.217: INFO: (6) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 6.127595ms) Mar 22 13:36:47.219: INFO: (6) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... (200; 6.254044ms) Mar 22 13:36:47.226: INFO: (7) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 6.202735ms) Mar 22 13:36:47.226: INFO: (7) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 6.329018ms) Mar 22 13:36:47.226: INFO: (7) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.280914ms) Mar 22 13:36:47.226: INFO: (7) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.266084ms) Mar 22 13:36:47.227: INFO: (7) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 6.36591ms) Mar 22 13:36:47.227: INFO: (7) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 6.351481ms) Mar 22 13:36:47.230: INFO: (8) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 2.885349ms) Mar 22 13:36:47.231: INFO: (8) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.254899ms) Mar 22 13:36:47.231: INFO: (8) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.488842ms) Mar 22 13:36:47.231: INFO: (8) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 4.537313ms) Mar 22 13:36:47.231: INFO: (8) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 4.555074ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.085951ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 5.098667ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.149872ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... 
(200; 5.090011ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 5.167379ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 5.111746ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.234637ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.247579ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.208674ms) Mar 22 13:36:47.232: INFO: (8) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 5.29693ms) Mar 22 13:36:47.237: INFO: (9) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test<... (200; 5.401251ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 5.565332ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.45578ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.584064ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 5.59792ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 6.008651ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 6.07753ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 6.080186ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 6.179534ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 6.172015ms) Mar 22 13:36:47.238: INFO: (9) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 6.095104ms) Mar 22 13:36:47.242: INFO: (10) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.15607ms) Mar 22 13:36:47.242: INFO: (10) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 4.132766ms) Mar 22 13:36:47.242: INFO: (10) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 4.227582ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... 
(200; 4.11183ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.20728ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 4.17807ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.64764ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.643307ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 4.646038ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 4.763344ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 5.048103ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 4.937458ms) Mar 22 13:36:47.243: INFO: (10) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 5.078315ms) Mar 22 13:36:47.244: INFO: (10) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 5.276173ms) Mar 22 13:36:47.244: INFO: (10) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 5.601894ms) Mar 22 13:36:47.248: INFO: (11) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 3.533017ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 5.314264ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.312313ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 5.347893ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 5.218892ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 5.326956ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 5.354993ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 5.259292ms) Mar 22 13:36:47.249: INFO: (11) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test<... 
(200; 4.593578ms) Mar 22 13:36:47.255: INFO: (12) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.618741ms) Mar 22 13:36:47.255: INFO: (12) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.613216ms) Mar 22 13:36:47.255: INFO: (12) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 4.726657ms) Mar 22 13:36:47.255: INFO: (12) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 4.766341ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.842433ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 4.864936ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 4.85019ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 4.870749ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 5.033689ms) Mar 22 13:36:47.256: INFO: (12) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.078906ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.290954ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.443343ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 4.393623ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 4.624904ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 4.577836ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 4.700926ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 4.679844ms) Mar 22 13:36:47.260: INFO: (13) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 4.676141ms) Mar 22 13:36:47.261: INFO: (13) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.895205ms) Mar 22 13:36:47.261: INFO: (13) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.992753ms) Mar 22 13:36:47.261: INFO: (13) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.984827ms) Mar 22 13:36:47.261: INFO: (13) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... 
(200; 2.788415ms) Mar 22 13:36:47.264: INFO: (14) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 2.962046ms) Mar 22 13:36:47.264: INFO: (14) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 3.166566ms) Mar 22 13:36:47.264: INFO: (14) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.29106ms) Mar 22 13:36:47.264: INFO: (14) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 3.379853ms) Mar 22 13:36:47.265: INFO: (14) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 3.753543ms) Mar 22 13:36:47.265: INFO: (14) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test<... (200; 4.105485ms) Mar 22 13:36:47.265: INFO: (14) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 4.238635ms) Mar 22 13:36:47.265: INFO: (14) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.423002ms) Mar 22 13:36:47.265: INFO: (14) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.341708ms) Mar 22 13:36:47.268: INFO: (15) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 2.599971ms) Mar 22 13:36:47.269: INFO: (15) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.460879ms) Mar 22 13:36:47.269: INFO: (15) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.568755ms) Mar 22 13:36:47.269: INFO: (15) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.665132ms) Mar 22 13:36:47.269: INFO: (15) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 3.619363ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 3.987587ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.039361ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 3.974079ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.062885ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 4.215637ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 4.208727ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 4.504913ms) Mar 22 13:36:47.270: INFO: (15) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 4.686623ms) Mar 22 13:36:47.273: INFO: (16) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 3.193523ms) Mar 22 13:36:47.274: INFO: (16) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.801891ms) Mar 22 13:36:47.274: INFO: (16) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... 
(200; 4.11149ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 4.307006ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.793114ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.659285ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 4.859467ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test<... (200; 4.828332ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 4.885575ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 4.966733ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.961931ms) Mar 22 13:36:47.275: INFO: (16) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 4.985869ms) Mar 22 13:36:47.276: INFO: (16) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 5.168879ms) Mar 22 13:36:47.276: INFO: (16) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 5.236463ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.18698ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.223973ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.272831ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9/proxy/: test (200; 3.362339ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 3.439182ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 3.429003ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: ... (200; 3.509153ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.617626ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 3.554098ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 3.552603ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 3.593275ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 3.722603ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 3.785452ms) Mar 22 13:36:47.279: INFO: (17) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 3.75842ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... 
(200; 3.323098ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 3.384129ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.327612ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 3.356805ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 3.375208ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... (200; 3.316935ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 3.796047ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 3.886208ms) Mar 22 13:36:47.283: INFO: (18) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 3.912554ms) Mar 22 13:36:47.284: INFO: (18) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 3.940944ms) Mar 22 13:36:47.284: INFO: (18) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.009966ms) Mar 22 13:36:47.284: INFO: (18) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 4.02494ms) Mar 22 13:36:47.284: INFO: (18) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 4.016392ms) Mar 22 13:36:47.286: INFO: (19) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 2.248904ms) Mar 22 13:36:47.286: INFO: (19) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:460/proxy/: tls baz (200; 2.117126ms) Mar 22 13:36:47.286: INFO: (19) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:1080/proxy/: test<... (200; 2.360217ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.760527ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:160/proxy/: foo (200; 3.70867ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:162/proxy/: bar (200; 4.112475ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:462/proxy/: tls qux (200; 4.0929ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/https:proxy-service-s4k99-hr8g9:443/proxy/: test (200; 3.977446ms) Mar 22 13:36:47.288: INFO: (19) /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/: ... 
(200; 4.274922ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname2/proxy/: bar (200; 4.272766ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/proxy-service-s4k99:portname1/proxy/: foo (200; 4.121143ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname1/proxy/: tls baz (200; 4.047588ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname1/proxy/: foo (200; 4.936822ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/https:proxy-service-s4k99:tlsportname2/proxy/: tls qux (200; 4.833396ms) Mar 22 13:36:47.289: INFO: (19) /api/v1/namespaces/proxy-9867/services/http:proxy-service-s4k99:portname2/proxy/: bar (200; 4.898054ms) STEP: deleting ReplicationController proxy-service-s4k99 in namespace proxy-9867, will wait for the garbage collector to delete the pods Mar 22 13:36:47.347: INFO: Deleting ReplicationController proxy-service-s4k99 took: 6.148578ms Mar 22 13:36:47.647: INFO: Terminating ReplicationController proxy-service-s4k99 pods took: 300.300155ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:36:50.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9867" for this suite. Mar 22 13:36:56.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:36:56.467: INFO: namespace proxy-9867 deletion completed in 6.114552403s • [SLOW TEST:16.505 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:36:56.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d252631c-075b-404f-840c-5ec3f88f0b92 STEP: Creating a pod to test consume configMaps Mar 22 13:36:56.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628" in namespace "configmap-4105" to be "success or failure" Mar 22 13:36:56.572: INFO: Pod "pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761387ms Mar 22 13:36:58.576: INFO: Pod "pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021894461s Mar 22 13:37:00.580: INFO: Pod "pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026017277s STEP: Saw pod success Mar 22 13:37:00.580: INFO: Pod "pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628" satisfied condition "success or failure" Mar 22 13:37:00.584: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628 container configmap-volume-test: STEP: delete the pod Mar 22 13:37:00.603: INFO: Waiting for pod pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628 to disappear Mar 22 13:37:00.626: INFO: Pod pod-configmaps-578e128e-fb2e-4a81-a95e-fb525d127628 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:37:00.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4105" for this suite. Mar 22 13:37:06.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:37:06.716: INFO: namespace configmap-4105 deletion completed in 6.085947585s • [SLOW TEST:10.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:37:06.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-64c43660-d43c-454c-bb1a-6bb2ea09667c STEP: Creating secret with name s-test-opt-upd-9096bf60-3126-4ee3-a9f1-5201288376ee STEP: Creating the pod STEP: Deleting secret s-test-opt-del-64c43660-d43c-454c-bb1a-6bb2ea09667c STEP: Updating secret s-test-opt-upd-9096bf60-3126-4ee3-a9f1-5201288376ee STEP: Creating secret with name s-test-opt-create-1cd7a77f-2687-465d-bcb4-b92c890fd453 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:38:17.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4259" for this suite. 
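The hundreds of "(N) /api/v1/namespaces/proxy-9867/..." entries in the [sig-network] Proxy test above are 20 iterations over every proxy URL variant: pods vs. services, bare vs. "http:"/"https:"-prefixed names, named vs. numeric ports, each expected to return 200 with a known body ("foo", "bar", "tls baz", ...). A minimal client-go sketch of one such request follows; the kubeconfig path, namespace, and pod/port come from the log, and the context-free DoRaw() signature matches the client-go vintage of this v1.15 run (newer releases take a context.Context).

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-9867/pods/http:proxy-service-s4k99-hr8g9:1080/proxy/
	// The "http:" prefix selects the scheme and ":1080" the target port,
	// exactly as in the URLs logged above.
	body, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-9867").
		Resource("pods").
		Name("http:proxy-service-s4k99-hr8g9:1080").
		SubResource("proxy").
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}
```

Swapping Resource("pods") for Resource("services") and the name for, say, "https:proxy-service-s4k99:tlsportname1" exercises the service-level proxy paths seen in the same log.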
Mar 22 13:38:39.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:38:39.277: INFO: namespace projected-4259 deletion completed in 22.10249206s • [SLOW TEST:92.560 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:38:39.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0322 13:38:40.539409 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 22 13:38:40.539: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:38:40.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5914" for this suite. 
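The Garbage collector test above deletes a Deployment and polls until its ReplicaSet and Pods are collected; the "expected 0 rs, got 1 rs" STEP lines are the poll observing not-yet-collected dependents, not a failure. A sketch of the same cascading delete, again using the context-free Delete signature of this client-go vintage; the helper name is illustrative.

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithCascade removes a Deployment and lets the garbage collector
// clean up its ReplicaSets and Pods asynchronously. Passing
// metav1.DeletePropagationOrphan instead would leave the dependents behind,
// which is what the "when not orphaning" wording distinguishes.
func deleteWithCascade(clientset *kubernetes.Clientset, namespace, name string) error {
	propagation := metav1.DeletePropagationBackground
	return clientset.AppsV1().Deployments(namespace).Delete(
		name,
		&metav1.DeleteOptions{PropagationPolicy: &propagation},
	)
}
```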
Mar 22 13:38:46.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:38:46.634: INFO: namespace gc-5914 deletion completed in 6.091336169s • [SLOW TEST:7.357 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:38:46.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 22 13:38:46.710: INFO: Waiting up to 5m0s for pod "client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188" in namespace "containers-9052" to be "success or failure" Mar 22 13:38:46.763: INFO: Pod "client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188": Phase="Pending", Reason="", readiness=false. Elapsed: 52.180595ms Mar 22 13:38:48.766: INFO: Pod "client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055958064s Mar 22 13:38:50.771: INFO: Pod "client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060140719s STEP: Saw pod success Mar 22 13:38:50.771: INFO: Pod "client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188" satisfied condition "success or failure" Mar 22 13:38:50.774: INFO: Trying to get logs from node iruya-worker pod client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188 container test-container: STEP: delete the pod Mar 22 13:38:50.796: INFO: Waiting for pod client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188 to disappear Mar 22 13:38:50.800: INFO: Pod client-containers-b2d463f3-0a3c-4821-a1a8-31f4a9b30188 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:38:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9052" for this suite. 
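The Docker Containers test above runs a container with neither command nor args set, so the kubelet must fall back to the image's ENTRYPOINT and CMD. A minimal pod spec making that explicit; the image name is illustrative (the conformance test uses its own test image).

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultsPod leaves Command and Args unset: the container runtime runs the
// image's ENTRYPOINT with the image's CMD as its arguments. Setting Command
// overrides the ENTRYPOINT; setting Args overrides only the CMD.
var defaultsPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox", // illustrative; Command and Args deliberately omitted
		}},
	},
}
```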
Mar 22 13:38:56.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:38:56.895: INFO: namespace containers-9052 deletion completed in 6.090505537s • [SLOW TEST:10.261 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:38:56.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0322 13:39:07.716064 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 22 13:39:07.716: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:39:07.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5112" for this suite. 
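In the Garbage collector test above, half of the pods created by simpletest-rc-to-be-deleted get simpletest-rc-to-stay added as a second owner, and deleting the first RC must leave those pods alive: an object is only collected once every listed owner is gone. A sketch of attaching the extra owner reference; the helper name is illustrative, and the UID comes from the live RC object.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addSecondOwner makes pod a dependent of rcToStay as well. The garbage
// collector deletes a dependent only when none of its OwnerReferences
// resolve to a live object anymore.
func addSecondOwner(pod *corev1.Pod, rcToStay *corev1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name, // e.g. simpletest-rc-to-stay
		UID:        rcToStay.UID,
	})
}
```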
Mar 22 13:39:13.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:39:14.201: INFO: namespace gc-5112 deletion completed in 6.4823445s • [SLOW TEST:17.307 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:39:14.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f677732c-6eb8-419d-bab0-c85cf0cd690c STEP: Creating a pod to test consume secrets Mar 22 13:39:14.557: INFO: Waiting up to 5m0s for pod "pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98" in namespace "secrets-4423" to be "success or failure" Mar 22 13:39:14.616: INFO: Pod "pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98": Phase="Pending", Reason="", readiness=false. Elapsed: 58.922668ms Mar 22 13:39:16.620: INFO: Pod "pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063377877s Mar 22 13:39:18.625: INFO: Pod "pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068413969s STEP: Saw pod success Mar 22 13:39:18.626: INFO: Pod "pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98" satisfied condition "success or failure" Mar 22 13:39:18.628: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98 container secret-volume-test: STEP: delete the pod Mar 22 13:39:18.668: INFO: Waiting for pod pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98 to disappear Mar 22 13:39:18.690: INFO: Pod pod-secrets-9f785419-6dc9-4bad-8efc-ad85390a0c98 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:39:18.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4423" for this suite. 
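The Secrets test above mounts a secret as a volume with defaultMode set and then asserts the permission bits on the projected files from inside the pod. A pod-spec sketch follows; the 0400 mode and the image are illustrative, and ConfigMap volumes (as in the configmap-4105 test earlier) take the same DefaultMode field.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var mode = int32(0400) // octal; files show up as -r--------

// secretPod projects the secret's keys as files under /etc/secret-volume,
// each created with the mode above instead of the 0644 default.
var secretPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-defaultmode"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test-f677732c-6eb8-419d-bab0-c85cf0cd690c",
					DefaultMode: &mode,
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox", // illustrative
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				MountPath: "/etc/secret-volume",
			}},
		}},
	},
}
```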
Mar 22 13:39:24.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:39:24.813: INFO: namespace secrets-4423 deletion completed in 6.119015741s • [SLOW TEST:10.611 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:39:24.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 13:39:27.902: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:39:27.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7768" for this suite. 
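The Container Runtime test above confirms that FallbackToLogsOnError leaves the termination message empty when a pod succeeds (the "Expected: &{} to match Container's Termination Message: --" line): the container's log tail is substituted only when the container fails without writing its termination-log file. The relevant container fields, with an illustrative image and command:

```go
package example

import corev1 "k8s.io/api/core/v1"

// onSuccessEmpty exits 0 without writing /dev/termination-log, so the
// reported termination message stays empty; FallbackToLogsOnError copies
// the log tail into the message only on a failed container.
var onSuccessEmpty = corev1.Container{
	Name:                     "termination-message-container",
	Image:                    "busybox", // illustrative
	Command:                  []string{"/bin/sh", "-c", "exit 0"},
	TerminationMessagePath:   corev1.TerminationMessagePathDefault, // /dev/termination-log
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}
```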
Mar 22 13:39:33.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:39:34.040: INFO: namespace container-runtime-7768 deletion completed in 6.0945628s • [SLOW TEST:9.227 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:39:34.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 22 13:39:38.126: INFO: Pod pod-hostip-eadcd4e9-d5f5-4203-bef6-deed5d22e879 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:39:38.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3425" for this suite. 
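The Pods test above only needs to create a pod and wait until status reports the node address (hostIP 172.17.0.5 on this kind cluster). Reading it back is a one-liner; the helper name is illustrative, and the context-free Get signature again matches this client-go vintage.

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostIPOf returns the address of the node the pod was scheduled onto.
// It is empty until the pod is bound and the kubelet has posted status,
// which is why the test polls before asserting.
func hostIPOf(clientset *kubernetes.Clientset, namespace, name string) (string, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Status.HostIP, nil
}
```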
Mar 22 13:40:00.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:40:00.225: INFO: namespace pods-3425 deletion completed in 22.095000726s • [SLOW TEST:26.185 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:40:00.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 22 13:40:10.322: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.322: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.353246 6 log.go:172] (0xc00169a2c0) (0xc0010d2b40) Create stream I0322 13:40:10.353279 6 log.go:172] (0xc00169a2c0) (0xc0010d2b40) Stream added, broadcasting: 1 I0322 13:40:10.354855 6 log.go:172] (0xc00169a2c0) Reply frame received for 1 I0322 13:40:10.354882 6 log.go:172] (0xc00169a2c0) (0xc002c68c80) Create stream I0322 13:40:10.354891 6 log.go:172] (0xc00169a2c0) (0xc002c68c80) Stream added, broadcasting: 3 I0322 13:40:10.355558 6 log.go:172] (0xc00169a2c0) Reply frame received for 3 I0322 13:40:10.355585 6 log.go:172] (0xc00169a2c0) (0xc0021685a0) Create stream I0322 13:40:10.355594 6 log.go:172] (0xc00169a2c0) (0xc0021685a0) Stream added, broadcasting: 5 I0322 13:40:10.356183 6 log.go:172] (0xc00169a2c0) Reply frame received for 5 I0322 13:40:10.436025 6 log.go:172] (0xc00169a2c0) Data frame received for 5 I0322 13:40:10.436058 6 log.go:172] (0xc0021685a0) (5) Data frame handling I0322 13:40:10.436077 6 log.go:172] (0xc00169a2c0) Data frame received for 3 I0322 13:40:10.436083 6 log.go:172] (0xc002c68c80) (3) Data frame handling I0322 13:40:10.436091 6 log.go:172] (0xc002c68c80) (3) Data frame sent I0322 13:40:10.436109 6 log.go:172] (0xc00169a2c0) Data frame received for 3 I0322 13:40:10.436115 6 log.go:172] (0xc002c68c80) (3) Data frame handling I0322 13:40:10.438097 6 log.go:172] (0xc00169a2c0) Data frame received for 1 I0322 13:40:10.438116 6 log.go:172] (0xc0010d2b40) (1) Data frame handling I0322 13:40:10.438130 6 log.go:172] (0xc0010d2b40) (1) Data frame sent I0322 13:40:10.438149 6 log.go:172] (0xc00169a2c0) (0xc0010d2b40) Stream removed, broadcasting: 1 I0322 13:40:10.438250 6 log.go:172] (0xc00169a2c0) Go away received I0322 13:40:10.438317 6 
log.go:172] (0xc00169a2c0) (0xc0010d2b40) Stream removed, broadcasting: 1 I0322 13:40:10.438362 6 log.go:172] (0xc00169a2c0) (0xc002c68c80) Stream removed, broadcasting: 3 I0322 13:40:10.438379 6 log.go:172] (0xc00169a2c0) (0xc0021685a0) Stream removed, broadcasting: 5 Mar 22 13:40:10.438: INFO: Exec stderr: "" Mar 22 13:40:10.438: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.438: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.476672 6 log.go:172] (0xc000e786e0) (0xc0010a60a0) Create stream I0322 13:40:10.476706 6 log.go:172] (0xc000e786e0) (0xc0010a60a0) Stream added, broadcasting: 1 I0322 13:40:10.479323 6 log.go:172] (0xc000e786e0) Reply frame received for 1 I0322 13:40:10.479356 6 log.go:172] (0xc000e786e0) (0xc001a50000) Create stream I0322 13:40:10.479367 6 log.go:172] (0xc000e786e0) (0xc001a50000) Stream added, broadcasting: 3 I0322 13:40:10.480292 6 log.go:172] (0xc000e786e0) Reply frame received for 3 I0322 13:40:10.480321 6 log.go:172] (0xc000e786e0) (0xc002168820) Create stream I0322 13:40:10.480328 6 log.go:172] (0xc000e786e0) (0xc002168820) Stream added, broadcasting: 5 I0322 13:40:10.481267 6 log.go:172] (0xc000e786e0) Reply frame received for 5 I0322 13:40:10.540786 6 log.go:172] (0xc000e786e0) Data frame received for 5 I0322 13:40:10.540834 6 log.go:172] (0xc002168820) (5) Data frame handling I0322 13:40:10.540858 6 log.go:172] (0xc000e786e0) Data frame received for 3 I0322 13:40:10.540872 6 log.go:172] (0xc001a50000) (3) Data frame handling I0322 13:40:10.540885 6 log.go:172] (0xc001a50000) (3) Data frame sent I0322 13:40:10.540898 6 log.go:172] (0xc000e786e0) Data frame received for 3 I0322 13:40:10.540914 6 log.go:172] (0xc001a50000) (3) Data frame handling I0322 13:40:10.542955 6 log.go:172] (0xc000e786e0) Data frame received for 1 I0322 13:40:10.542976 6 log.go:172] (0xc0010a60a0) (1) Data frame handling I0322 13:40:10.542988 6 log.go:172] (0xc0010a60a0) (1) Data frame sent I0322 13:40:10.543000 6 log.go:172] (0xc000e786e0) (0xc0010a60a0) Stream removed, broadcasting: 1 I0322 13:40:10.543081 6 log.go:172] (0xc000e786e0) (0xc0010a60a0) Stream removed, broadcasting: 1 I0322 13:40:10.543126 6 log.go:172] (0xc000e786e0) Go away received I0322 13:40:10.543178 6 log.go:172] (0xc000e786e0) (0xc001a50000) Stream removed, broadcasting: 3 I0322 13:40:10.543218 6 log.go:172] (0xc000e786e0) (0xc002168820) Stream removed, broadcasting: 5 Mar 22 13:40:10.543: INFO: Exec stderr: "" Mar 22 13:40:10.543: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.543: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.584090 6 log.go:172] (0xc00173d8c0) (0xc002168a00) Create stream I0322 13:40:10.584129 6 log.go:172] (0xc00173d8c0) (0xc002168a00) Stream added, broadcasting: 1 I0322 13:40:10.587451 6 log.go:172] (0xc00173d8c0) Reply frame received for 1 I0322 13:40:10.587509 6 log.go:172] (0xc00173d8c0) (0xc002c68d20) Create stream I0322 13:40:10.587532 6 log.go:172] (0xc00173d8c0) (0xc002c68d20) Stream added, broadcasting: 3 I0322 13:40:10.588483 6 log.go:172] (0xc00173d8c0) Reply frame received for 3 I0322 13:40:10.588525 6 log.go:172] (0xc00173d8c0) (0xc001a500a0) Create stream I0322 13:40:10.588538 6 log.go:172] (0xc00173d8c0) (0xc001a500a0) 
Stream added, broadcasting: 5 I0322 13:40:10.589699 6 log.go:172] (0xc00173d8c0) Reply frame received for 5 I0322 13:40:10.640906 6 log.go:172] (0xc00173d8c0) Data frame received for 3 I0322 13:40:10.640938 6 log.go:172] (0xc002c68d20) (3) Data frame handling I0322 13:40:10.640960 6 log.go:172] (0xc002c68d20) (3) Data frame sent I0322 13:40:10.641042 6 log.go:172] (0xc00173d8c0) Data frame received for 3 I0322 13:40:10.641107 6 log.go:172] (0xc002c68d20) (3) Data frame handling I0322 13:40:10.641290 6 log.go:172] (0xc00173d8c0) Data frame received for 5 I0322 13:40:10.641305 6 log.go:172] (0xc001a500a0) (5) Data frame handling I0322 13:40:10.642819 6 log.go:172] (0xc00173d8c0) Data frame received for 1 I0322 13:40:10.642845 6 log.go:172] (0xc002168a00) (1) Data frame handling I0322 13:40:10.642861 6 log.go:172] (0xc002168a00) (1) Data frame sent I0322 13:40:10.642875 6 log.go:172] (0xc00173d8c0) (0xc002168a00) Stream removed, broadcasting: 1 I0322 13:40:10.642897 6 log.go:172] (0xc00173d8c0) Go away received I0322 13:40:10.643053 6 log.go:172] (0xc00173d8c0) (0xc002168a00) Stream removed, broadcasting: 1 I0322 13:40:10.643089 6 log.go:172] (0xc00173d8c0) (0xc002c68d20) Stream removed, broadcasting: 3 I0322 13:40:10.643113 6 log.go:172] (0xc00173d8c0) (0xc001a500a0) Stream removed, broadcasting: 5 Mar 22 13:40:10.643: INFO: Exec stderr: "" Mar 22 13:40:10.643: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.643: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.675934 6 log.go:172] (0xc001bfc160) (0xc002168e60) Create stream I0322 13:40:10.675973 6 log.go:172] (0xc001bfc160) (0xc002168e60) Stream added, broadcasting: 1 I0322 13:40:10.678834 6 log.go:172] (0xc001bfc160) Reply frame received for 1 I0322 13:40:10.678875 6 log.go:172] (0xc001bfc160) (0xc001a50140) Create stream I0322 13:40:10.678890 6 log.go:172] (0xc001bfc160) (0xc001a50140) Stream added, broadcasting: 3 I0322 13:40:10.680057 6 log.go:172] (0xc001bfc160) Reply frame received for 3 I0322 13:40:10.680112 6 log.go:172] (0xc001bfc160) (0xc001a50320) Create stream I0322 13:40:10.680125 6 log.go:172] (0xc001bfc160) (0xc001a50320) Stream added, broadcasting: 5 I0322 13:40:10.681267 6 log.go:172] (0xc001bfc160) Reply frame received for 5 I0322 13:40:10.744376 6 log.go:172] (0xc001bfc160) Data frame received for 5 I0322 13:40:10.744400 6 log.go:172] (0xc001a50320) (5) Data frame handling I0322 13:40:10.744449 6 log.go:172] (0xc001bfc160) Data frame received for 3 I0322 13:40:10.744497 6 log.go:172] (0xc001a50140) (3) Data frame handling I0322 13:40:10.744527 6 log.go:172] (0xc001a50140) (3) Data frame sent I0322 13:40:10.744744 6 log.go:172] (0xc001bfc160) Data frame received for 3 I0322 13:40:10.744787 6 log.go:172] (0xc001a50140) (3) Data frame handling I0322 13:40:10.746687 6 log.go:172] (0xc001bfc160) Data frame received for 1 I0322 13:40:10.746705 6 log.go:172] (0xc002168e60) (1) Data frame handling I0322 13:40:10.746716 6 log.go:172] (0xc002168e60) (1) Data frame sent I0322 13:40:10.746724 6 log.go:172] (0xc001bfc160) (0xc002168e60) Stream removed, broadcasting: 1 I0322 13:40:10.746790 6 log.go:172] (0xc001bfc160) (0xc002168e60) Stream removed, broadcasting: 1 I0322 13:40:10.746801 6 log.go:172] (0xc001bfc160) (0xc001a50140) Stream removed, broadcasting: 3 I0322 13:40:10.746806 6 log.go:172] (0xc001bfc160) (0xc001a50320) Stream removed, broadcasting: 5 
Mar 22 13:40:10.746: INFO: Exec stderr: "" I0322 13:40:10.746831 6 log.go:172] (0xc001bfc160) Go away received STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 22 13:40:10.746: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.746: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.781993 6 log.go:172] (0xc001bfcd10) (0xc002169220) Create stream I0322 13:40:10.782016 6 log.go:172] (0xc001bfcd10) (0xc002169220) Stream added, broadcasting: 1 I0322 13:40:10.786601 6 log.go:172] (0xc001bfcd10) Reply frame received for 1 I0322 13:40:10.786719 6 log.go:172] (0xc001bfcd10) (0xc0010d2be0) Create stream I0322 13:40:10.786798 6 log.go:172] (0xc001bfcd10) (0xc0010d2be0) Stream added, broadcasting: 3 I0322 13:40:10.788561 6 log.go:172] (0xc001bfcd10) Reply frame received for 3 I0322 13:40:10.788601 6 log.go:172] (0xc001bfcd10) (0xc002c68dc0) Create stream I0322 13:40:10.788615 6 log.go:172] (0xc001bfcd10) (0xc002c68dc0) Stream added, broadcasting: 5 I0322 13:40:10.789601 6 log.go:172] (0xc001bfcd10) Reply frame received for 5 I0322 13:40:10.850923 6 log.go:172] (0xc001bfcd10) Data frame received for 3 I0322 13:40:10.850973 6 log.go:172] (0xc0010d2be0) (3) Data frame handling I0322 13:40:10.850988 6 log.go:172] (0xc0010d2be0) (3) Data frame sent I0322 13:40:10.851002 6 log.go:172] (0xc001bfcd10) Data frame received for 3 I0322 13:40:10.851014 6 log.go:172] (0xc0010d2be0) (3) Data frame handling I0322 13:40:10.851088 6 log.go:172] (0xc001bfcd10) Data frame received for 5 I0322 13:40:10.851171 6 log.go:172] (0xc002c68dc0) (5) Data frame handling I0322 13:40:10.852607 6 log.go:172] (0xc001bfcd10) Data frame received for 1 I0322 13:40:10.852637 6 log.go:172] (0xc002169220) (1) Data frame handling I0322 13:40:10.852658 6 log.go:172] (0xc002169220) (1) Data frame sent I0322 13:40:10.852679 6 log.go:172] (0xc001bfcd10) (0xc002169220) Stream removed, broadcasting: 1 I0322 13:40:10.852714 6 log.go:172] (0xc001bfcd10) Go away received I0322 13:40:10.852814 6 log.go:172] (0xc001bfcd10) (0xc002169220) Stream removed, broadcasting: 1 I0322 13:40:10.852839 6 log.go:172] (0xc001bfcd10) (0xc0010d2be0) Stream removed, broadcasting: 3 I0322 13:40:10.852852 6 log.go:172] (0xc001bfcd10) (0xc002c68dc0) Stream removed, broadcasting: 5 Mar 22 13:40:10.852: INFO: Exec stderr: "" Mar 22 13:40:10.852: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.852: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.886994 6 log.go:172] (0xc000d51810) (0xc001a50960) Create stream I0322 13:40:10.887021 6 log.go:172] (0xc000d51810) (0xc001a50960) Stream added, broadcasting: 1 I0322 13:40:10.889417 6 log.go:172] (0xc000d51810) Reply frame received for 1 I0322 13:40:10.889466 6 log.go:172] (0xc000d51810) (0xc0021692c0) Create stream I0322 13:40:10.889481 6 log.go:172] (0xc000d51810) (0xc0021692c0) Stream added, broadcasting: 3 I0322 13:40:10.890396 6 log.go:172] (0xc000d51810) Reply frame received for 3 I0322 13:40:10.890450 6 log.go:172] (0xc000d51810) (0xc002c68e60) Create stream I0322 13:40:10.890466 6 log.go:172] (0xc000d51810) (0xc002c68e60) Stream added, broadcasting: 5 I0322 13:40:10.891281 6 log.go:172] (0xc000d51810) Reply frame 
received for 5 I0322 13:40:10.966908 6 log.go:172] (0xc000d51810) Data frame received for 5 I0322 13:40:10.966940 6 log.go:172] (0xc002c68e60) (5) Data frame handling I0322 13:40:10.967018 6 log.go:172] (0xc000d51810) Data frame received for 3 I0322 13:40:10.967073 6 log.go:172] (0xc0021692c0) (3) Data frame handling I0322 13:40:10.967103 6 log.go:172] (0xc0021692c0) (3) Data frame sent I0322 13:40:10.967129 6 log.go:172] (0xc000d51810) Data frame received for 3 I0322 13:40:10.967142 6 log.go:172] (0xc0021692c0) (3) Data frame handling I0322 13:40:10.968805 6 log.go:172] (0xc000d51810) Data frame received for 1 I0322 13:40:10.968821 6 log.go:172] (0xc001a50960) (1) Data frame handling I0322 13:40:10.968828 6 log.go:172] (0xc001a50960) (1) Data frame sent I0322 13:40:10.968837 6 log.go:172] (0xc000d51810) (0xc001a50960) Stream removed, broadcasting: 1 I0322 13:40:10.968927 6 log.go:172] (0xc000d51810) (0xc001a50960) Stream removed, broadcasting: 1 I0322 13:40:10.968939 6 log.go:172] (0xc000d51810) (0xc0021692c0) Stream removed, broadcasting: 3 I0322 13:40:10.968994 6 log.go:172] (0xc000d51810) Go away received I0322 13:40:10.969092 6 log.go:172] (0xc000d51810) (0xc002c68e60) Stream removed, broadcasting: 5 Mar 22 13:40:10.969: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 22 13:40:10.969: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:10.969: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:10.997660 6 log.go:172] (0xc001fa4210) (0xc002169680) Create stream I0322 13:40:10.997697 6 log.go:172] (0xc001fa4210) (0xc002169680) Stream added, broadcasting: 1 I0322 13:40:11.000420 6 log.go:172] (0xc001fa4210) Reply frame received for 1 I0322 13:40:11.000530 6 log.go:172] (0xc001fa4210) (0xc0010a6320) Create stream I0322 13:40:11.000556 6 log.go:172] (0xc001fa4210) (0xc0010a6320) Stream added, broadcasting: 3 I0322 13:40:11.001679 6 log.go:172] (0xc001fa4210) Reply frame received for 3 I0322 13:40:11.001720 6 log.go:172] (0xc001fa4210) (0xc001a50be0) Create stream I0322 13:40:11.001732 6 log.go:172] (0xc001fa4210) (0xc001a50be0) Stream added, broadcasting: 5 I0322 13:40:11.002585 6 log.go:172] (0xc001fa4210) Reply frame received for 5 I0322 13:40:11.067279 6 log.go:172] (0xc001fa4210) Data frame received for 5 I0322 13:40:11.067312 6 log.go:172] (0xc001a50be0) (5) Data frame handling I0322 13:40:11.067340 6 log.go:172] (0xc001fa4210) Data frame received for 3 I0322 13:40:11.067350 6 log.go:172] (0xc0010a6320) (3) Data frame handling I0322 13:40:11.067357 6 log.go:172] (0xc0010a6320) (3) Data frame sent I0322 13:40:11.067366 6 log.go:172] (0xc001fa4210) Data frame received for 3 I0322 13:40:11.067375 6 log.go:172] (0xc0010a6320) (3) Data frame handling I0322 13:40:11.068741 6 log.go:172] (0xc001fa4210) Data frame received for 1 I0322 13:40:11.068786 6 log.go:172] (0xc002169680) (1) Data frame handling I0322 13:40:11.068817 6 log.go:172] (0xc002169680) (1) Data frame sent I0322 13:40:11.068840 6 log.go:172] (0xc001fa4210) (0xc002169680) Stream removed, broadcasting: 1 I0322 13:40:11.068882 6 log.go:172] (0xc001fa4210) Go away received I0322 13:40:11.068997 6 log.go:172] (0xc001fa4210) (0xc002169680) Stream removed, broadcasting: 1 I0322 13:40:11.069024 6 log.go:172] (0xc001fa4210) (0xc0010a6320) Stream removed, broadcasting: 3 I0322 
13:40:11.069039 6 log.go:172] (0xc001fa4210) (0xc001a50be0) Stream removed, broadcasting: 5 Mar 22 13:40:11.069: INFO: Exec stderr: "" Mar 22 13:40:11.069: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:11.069: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:11.103043 6 log.go:172] (0xc00169b8c0) (0xc0010d30e0) Create stream I0322 13:40:11.103075 6 log.go:172] (0xc00169b8c0) (0xc0010d30e0) Stream added, broadcasting: 1 I0322 13:40:11.106115 6 log.go:172] (0xc00169b8c0) Reply frame received for 1 I0322 13:40:11.106160 6 log.go:172] (0xc00169b8c0) (0xc001a50d20) Create stream I0322 13:40:11.106176 6 log.go:172] (0xc00169b8c0) (0xc001a50d20) Stream added, broadcasting: 3 I0322 13:40:11.107137 6 log.go:172] (0xc00169b8c0) Reply frame received for 3 I0322 13:40:11.107174 6 log.go:172] (0xc00169b8c0) (0xc001a50dc0) Create stream I0322 13:40:11.107186 6 log.go:172] (0xc00169b8c0) (0xc001a50dc0) Stream added, broadcasting: 5 I0322 13:40:11.108051 6 log.go:172] (0xc00169b8c0) Reply frame received for 5 I0322 13:40:11.173870 6 log.go:172] (0xc00169b8c0) Data frame received for 5 I0322 13:40:11.173928 6 log.go:172] (0xc001a50dc0) (5) Data frame handling I0322 13:40:11.173967 6 log.go:172] (0xc00169b8c0) Data frame received for 3 I0322 13:40:11.173988 6 log.go:172] (0xc001a50d20) (3) Data frame handling I0322 13:40:11.174011 6 log.go:172] (0xc001a50d20) (3) Data frame sent I0322 13:40:11.174030 6 log.go:172] (0xc00169b8c0) Data frame received for 3 I0322 13:40:11.174048 6 log.go:172] (0xc001a50d20) (3) Data frame handling I0322 13:40:11.175635 6 log.go:172] (0xc00169b8c0) Data frame received for 1 I0322 13:40:11.175681 6 log.go:172] (0xc0010d30e0) (1) Data frame handling I0322 13:40:11.175712 6 log.go:172] (0xc0010d30e0) (1) Data frame sent I0322 13:40:11.175734 6 log.go:172] (0xc00169b8c0) (0xc0010d30e0) Stream removed, broadcasting: 1 I0322 13:40:11.175760 6 log.go:172] (0xc00169b8c0) Go away received I0322 13:40:11.175870 6 log.go:172] (0xc00169b8c0) (0xc0010d30e0) Stream removed, broadcasting: 1 I0322 13:40:11.175895 6 log.go:172] (0xc00169b8c0) (0xc001a50d20) Stream removed, broadcasting: 3 I0322 13:40:11.175909 6 log.go:172] (0xc00169b8c0) (0xc001a50dc0) Stream removed, broadcasting: 5 Mar 22 13:40:11.175: INFO: Exec stderr: "" Mar 22 13:40:11.175: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:11.175: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:11.208819 6 log.go:172] (0xc000e79810) (0xc0010a68c0) Create stream I0322 13:40:11.208851 6 log.go:172] (0xc000e79810) (0xc0010a68c0) Stream added, broadcasting: 1 I0322 13:40:11.212915 6 log.go:172] (0xc000e79810) Reply frame received for 1 I0322 13:40:11.213030 6 log.go:172] (0xc000e79810) (0xc0010d3180) Create stream I0322 13:40:11.213091 6 log.go:172] (0xc000e79810) (0xc0010d3180) Stream added, broadcasting: 3 I0322 13:40:11.215989 6 log.go:172] (0xc000e79810) Reply frame received for 3 I0322 13:40:11.216057 6 log.go:172] (0xc000e79810) (0xc001a50e60) Create stream I0322 13:40:11.216077 6 log.go:172] (0xc000e79810) (0xc001a50e60) Stream added, broadcasting: 5 I0322 13:40:11.217021 6 log.go:172] (0xc000e79810) Reply frame received for 5 I0322 13:40:11.297313 6 log.go:172] 
(0xc000e79810) Data frame received for 3 I0322 13:40:11.297359 6 log.go:172] (0xc0010d3180) (3) Data frame handling I0322 13:40:11.297412 6 log.go:172] (0xc000e79810) Data frame received for 5 I0322 13:40:11.297473 6 log.go:172] (0xc001a50e60) (5) Data frame handling I0322 13:40:11.297509 6 log.go:172] (0xc0010d3180) (3) Data frame sent I0322 13:40:11.297528 6 log.go:172] (0xc000e79810) Data frame received for 3 I0322 13:40:11.297540 6 log.go:172] (0xc0010d3180) (3) Data frame handling I0322 13:40:11.299406 6 log.go:172] (0xc000e79810) Data frame received for 1 I0322 13:40:11.299441 6 log.go:172] (0xc0010a68c0) (1) Data frame handling I0322 13:40:11.299467 6 log.go:172] (0xc0010a68c0) (1) Data frame sent I0322 13:40:11.299494 6 log.go:172] (0xc000e79810) (0xc0010a68c0) Stream removed, broadcasting: 1 I0322 13:40:11.299522 6 log.go:172] (0xc000e79810) Go away received I0322 13:40:11.299594 6 log.go:172] (0xc000e79810) (0xc0010a68c0) Stream removed, broadcasting: 1 I0322 13:40:11.299607 6 log.go:172] (0xc000e79810) (0xc0010d3180) Stream removed, broadcasting: 3 I0322 13:40:11.299613 6 log.go:172] (0xc000e79810) (0xc001a50e60) Stream removed, broadcasting: 5 Mar 22 13:40:11.299: INFO: Exec stderr: "" Mar 22 13:40:11.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5107 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:40:11.299: INFO: >>> kubeConfig: /root/.kube/config I0322 13:40:11.337439 6 log.go:172] (0xc0025a0210) (0xc0010a6e60) Create stream I0322 13:40:11.337476 6 log.go:172] (0xc0025a0210) (0xc0010a6e60) Stream added, broadcasting: 1 I0322 13:40:11.339752 6 log.go:172] (0xc0025a0210) Reply frame received for 1 I0322 13:40:11.339802 6 log.go:172] (0xc0025a0210) (0xc0021697c0) Create stream I0322 13:40:11.339819 6 log.go:172] (0xc0025a0210) (0xc0021697c0) Stream added, broadcasting: 3 I0322 13:40:11.340747 6 log.go:172] (0xc0025a0210) Reply frame received for 3 I0322 13:40:11.340789 6 log.go:172] (0xc0025a0210) (0xc001a50f00) Create stream I0322 13:40:11.340806 6 log.go:172] (0xc0025a0210) (0xc001a50f00) Stream added, broadcasting: 5 I0322 13:40:11.341886 6 log.go:172] (0xc0025a0210) Reply frame received for 5 I0322 13:40:11.410993 6 log.go:172] (0xc0025a0210) Data frame received for 5 I0322 13:40:11.411035 6 log.go:172] (0xc001a50f00) (5) Data frame handling I0322 13:40:11.411070 6 log.go:172] (0xc0025a0210) Data frame received for 3 I0322 13:40:11.411084 6 log.go:172] (0xc0021697c0) (3) Data frame handling I0322 13:40:11.411098 6 log.go:172] (0xc0021697c0) (3) Data frame sent I0322 13:40:11.411135 6 log.go:172] (0xc0025a0210) Data frame received for 3 I0322 13:40:11.411146 6 log.go:172] (0xc0021697c0) (3) Data frame handling I0322 13:40:11.413556 6 log.go:172] (0xc0025a0210) Data frame received for 1 I0322 13:40:11.413590 6 log.go:172] (0xc0010a6e60) (1) Data frame handling I0322 13:40:11.413608 6 log.go:172] (0xc0010a6e60) (1) Data frame sent I0322 13:40:11.413626 6 log.go:172] (0xc0025a0210) (0xc0010a6e60) Stream removed, broadcasting: 1 I0322 13:40:11.413646 6 log.go:172] (0xc0025a0210) Go away received I0322 13:40:11.413769 6 log.go:172] (0xc0025a0210) (0xc0010a6e60) Stream removed, broadcasting: 1 I0322 13:40:11.413788 6 log.go:172] (0xc0025a0210) (0xc0021697c0) Stream removed, broadcasting: 3 I0322 13:40:11.413802 6 log.go:172] (0xc0025a0210) (0xc001a50f00) Stream removed, broadcasting: 5 Mar 22 13:40:11.413: INFO: Exec stderr: "" [AfterEach] [k8s.io] 
KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:40:11.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5107" for this suite. Mar 22 13:40:53.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:40:53.514: INFO: namespace e2e-kubelet-etc-hosts-5107 deletion completed in 42.096341485s • [SLOW TEST:53.288 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:40:53.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:40:53.588: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.680277ms)
Mar 22 13:40:53.592: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.391074ms)
Mar 22 13:40:53.595: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.228073ms)
Mar 22 13:40:53.598: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.519586ms)
Mar 22 13:40:53.602: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.653203ms)
Mar 22 13:40:53.606: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.646886ms)
Mar 22 13:40:53.609: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.57077ms)
Mar 22 13:40:53.613: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.933055ms)
Mar 22 13:40:53.617: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.490567ms)
Mar 22 13:40:53.620: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.403957ms)
Mar 22 13:40:53.624: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.526788ms)
Mar 22 13:40:53.627: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.609767ms)
Mar 22 13:40:53.631: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.512342ms)
Mar 22 13:40:53.635: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.510839ms)
Mar 22 13:40:53.638: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.806841ms)
Mar 22 13:40:53.642: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.648902ms)
Mar 22 13:40:53.657: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 15.364753ms)
Mar 22 13:40:53.661: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.653296ms)
Mar 22 13:40:53.665: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.641108ms)
Mar 22 13:40:53.668: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.520672ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:40:53.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1936" for this suite. Mar 22 13:40:59.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:40:59.759: INFO: namespace proxy-1936 deletion completed in 6.086157511s • [SLOW TEST:6.244 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:40:59.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:40:59.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec" in namespace "downward-api-3846" to be "success or failure" Mar 22 13:40:59.861: INFO: Pod "downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 26.933509ms Mar 22 13:41:01.866: INFO: Pod "downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031481219s Mar 22 13:41:03.870: INFO: Pod "downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035796966s STEP: Saw pod success Mar 22 13:41:03.870: INFO: Pod "downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec" satisfied condition "success or failure" Mar 22 13:41:03.874: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec container client-container: STEP: delete the pod Mar 22 13:41:03.909: INFO: Waiting for pod downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec to disappear Mar 22 13:41:03.920: INFO: Pod downwardapi-volume-2be2d135-a186-4355-90e7-4bf64bd2b6ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:41:03.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3846" for this suite. 
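The twenty numbered probes in the proxy-1936 spec above each GET the node's log directory through the apiserver's node proxy subresource and record the status code and round-trip latency; the containers/ and pods/ entries are the directory listing the node returns. A minimal sketch that reproduces one such request, assuming `kubectl proxy` is serving on its default address 127.0.0.1:8001 (only the node name is taken from the log; everything else is illustrative):

```go
// Sketch: fetch a node's log directory via the apiserver proxy subresource.
// Assumes `kubectl proxy` is listening on 127.0.0.1:8001 (its default).
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Same path the test probes; the response body is an HTML directory
	// listing with entries such as containers/ and pods/.
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes/iruya-worker/proxy/logs/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode) // the test asserts 200 on every probe
	fmt.Println(string(body))
}
```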
Mar 22 13:41:09.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:41:10.050: INFO: namespace downward-api-3846 deletion completed in 6.126983024s • [SLOW TEST:10.291 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:41:10.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3145/configmap-test-b944f4a0-46fc-4c9f-9e10-612f7abbccef STEP: Creating a pod to test consume configMaps Mar 22 13:41:10.149: INFO: Waiting up to 5m0s for pod "pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf" in namespace "configmap-3145" to be "success or failure" Mar 22 13:41:10.154: INFO: Pod "pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350256ms Mar 22 13:41:12.214: INFO: Pod "pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064917082s Mar 22 13:41:14.218: INFO: Pod "pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069100949s STEP: Saw pod success Mar 22 13:41:14.218: INFO: Pod "pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf" satisfied condition "success or failure" Mar 22 13:41:14.221: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf container env-test: STEP: delete the pod Mar 22 13:41:14.252: INFO: Waiting for pod pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf to disappear Mar 22 13:41:14.268: INFO: Pod pod-configmaps-16411052-ed44-411c-9e92-eb47a6e1ecbf no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:41:14.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3145" for this suite. 
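The configmap-3145 spec above injects a ConfigMap key into a container's environment and checks the pod runs to completion. A sketch of the pod shape it creates, written against the k8s.io/api types this suite is built on; the ConfigMap name, key, and variable name here are illustrative, not the generated ones in the log:

```go
// Sketch: a Pod that consumes a ConfigMap key as an environment variable.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}
```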
Mar 22 13:41:20.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:41:20.363: INFO: namespace configmap-3145 deletion completed in 6.092308301s • [SLOW TEST:10.313 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:41:20.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-ef9880f2-7fd0-4776-bec3-1a698d23e5bc in namespace container-probe-3885 Mar 22 13:41:24.454: INFO: Started pod test-webserver-ef9880f2-7fd0-4776-bec3-1a698d23e5bc in namespace container-probe-3885 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 13:41:24.458: INFO: Initial restart count of pod test-webserver-ef9880f2-7fd0-4776-bec3-1a698d23e5bc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:45:25.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3885" for this suite. 
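The container-probe-3885 spec above runs a webserver pod with an HTTP liveness probe for roughly four minutes and asserts that restartCount stays at 0, i.e. the probe keeps passing. Roughly, the probe wiring looks like the sketch below; field names follow the v1.15-era k8s.io/api used by this suite (later releases rename Handler to ProbeHandler), and the image, path, and thresholds are illustrative:

```go
// Sketch: a container with an HTTP liveness probe that should keep passing.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // illustrative tag
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      1,
					FailureThreshold:    3, // restartCount stays 0 as long as probes pass
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```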
Mar 22 13:45:31.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:45:31.674: INFO: namespace container-probe-3885 deletion completed in 6.119230093s • [SLOW TEST:251.311 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:45:31.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-ce0260c7-824d-4fca-9d5f-c4be6dfbc781 STEP: Creating a pod to test consume secrets Mar 22 13:45:31.742: INFO: Waiting up to 5m0s for pod "pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9" in namespace "secrets-8312" to be "success or failure" Mar 22 13:45:31.746: INFO: Pod "pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054009ms Mar 22 13:45:33.750: INFO: Pod "pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007698599s Mar 22 13:45:35.754: INFO: Pod "pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012112378s STEP: Saw pod success Mar 22 13:45:35.754: INFO: Pod "pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9" satisfied condition "success or failure" Mar 22 13:45:35.757: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9 container secret-volume-test: STEP: delete the pod Mar 22 13:45:35.777: INFO: Waiting for pod pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9 to disappear Mar 22 13:45:35.793: INFO: Pod pod-secrets-d777ff3b-c3e5-4ea2-a9c7-c981d2126fd9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:45:35.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8312" for this suite. 
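The secrets-8312 spec above mounts a Secret as a volume and remaps a stored key to a different file name via items (the "mappings" in the test name). A sketch of that pod shape, with illustrative names:

```go
// Sketch: mounting a Secret as a volume with a key-to-path mapping.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// Remap the stored key to a new file name in the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```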
Mar 22 13:45:41.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:45:41.890: INFO: namespace secrets-8312 deletion completed in 6.094067339s • [SLOW TEST:10.216 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:45:41.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:45:41.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf" in namespace "downward-api-600" to be "success or failure" Mar 22 13:45:41.957: INFO: Pod "downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.395525ms Mar 22 13:45:43.962: INFO: Pod "downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008752756s Mar 22 13:45:45.967: INFO: Pod "downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013068339s STEP: Saw pod success Mar 22 13:45:45.967: INFO: Pod "downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf" satisfied condition "success or failure" Mar 22 13:45:45.970: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf container client-container: STEP: delete the pod Mar 22 13:45:46.001: INFO: Waiting for pod downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf to disappear Mar 22 13:45:46.016: INFO: Pod downwardapi-volume-979baeed-418a-4828-9f86-42126f5e42bf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:45:46.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-600" for this suite. 
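Both downward API specs in this run (the cpu request earlier, the memory limit just above) exercise a downward API volume file backed by a resourceFieldRef. A sketch of the memory-limit variant; swapping Resource to "requests.cpu" gives the shape of the earlier cpu-request test. Names and sizes are illustrative:

```go
// Sketch: exposing a container's memory limit via a downward API volume.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```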
Mar 22 13:45:52.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:45:52.134: INFO: namespace downward-api-600 deletion completed in 6.114536324s • [SLOW TEST:10.243 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:45:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:45:56.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3798" for this suite. 
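The kubelet-test-3798 spec above schedules a busybox pod with hostAliases and verifies the kubelet writes the extra entries into the container's /etc/hosts. A sketch of that pod shape; the IP and hostnames are illustrative:

```go
// Sketch: a Pod with hostAliases, merged by the kubelet into /etc/hosts.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89", // illustrative
				Hostnames: []string{"foo.example.com", "bar.example.com"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
	fmt.Println(pod.Spec.HostAliases)
}
```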
Mar 22 13:46:46.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:46:46.332: INFO: namespace kubelet-test-3798 deletion completed in 50.08830823s • [SLOW TEST:54.197 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:46:46.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:46:46.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4091' Mar 22 13:46:49.843: INFO: stderr: "" Mar 22 13:46:49.843: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 22 13:46:49.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4091' Mar 22 13:46:50.130: INFO: stderr: "" Mar 22 13:46:50.130: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 22 13:46:51.135: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:46:51.135: INFO: Found 0 / 1 Mar 22 13:46:52.153: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:46:52.153: INFO: Found 0 / 1 Mar 22 13:46:53.135: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:46:53.135: INFO: Found 1 / 1 Mar 22 13:46:53.135: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 22 13:46:53.138: INFO: Selector matched 1 pods for map[app:redis] Mar 22 13:46:53.138: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
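The "Selector matched ... Found n / m" lines above come from the framework's WaitFor helper polling pods by label until they are running. A rough, simplified equivalent with client-go (v1.15-era signatures, which take no context argument; newer releases add one), reusing the kubeconfig path, namespace, and selector from the log:

```go
// Sketch: poll until all pods matching a label selector are Running.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Mirrors "WaitFor completed with timeout 5m0s" in the log above.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := clientset.CoreV1().Pods("kubectl-4091").
			List(metav1.ListOptions{LabelSelector: "app=redis"})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d / %d\n", running, len(pods.Items))
		return len(pods.Items) > 0 && running == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```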
Mar 22 13:46:53.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-njtjr --namespace=kubectl-4091' Mar 22 13:46:53.255: INFO: stderr: "" Mar 22 13:46:53.255: INFO: stdout: "Name: redis-master-njtjr\nNamespace: kubectl-4091\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Sun, 22 Mar 2020 13:46:49 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.100\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://732aef47b2384aadfb89cf2835300af2afda26c4510ddab41cdef297e2007a11\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 22 Mar 2020 13:46:52 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9nzv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-q9nzv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-q9nzv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-4091/redis-master-njtjr to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Mar 22 13:46:53.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4091' Mar 22 13:46:53.369: INFO: stderr: "" Mar 22 13:46:53.369: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4091\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-njtjr\n" Mar 22 13:46:53.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4091' Mar 22 13:46:53.470: INFO: stderr: "" Mar 22 13:46:53.470: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4091\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.143.152\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.100:6379\nSession Affinity: None\nEvents: \n" Mar 22 13:46:53.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 22 13:46:53.588: INFO: stderr: "" Mar 22 13:46:53.588: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 22 Mar 2020 13:46:48 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 22 Mar 2020 13:46:48 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 22 Mar 2020 13:46:48 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 22 Mar 2020 13:46:48 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d19h\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 6d19h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 6d19h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 6d19h\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d19h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 6d19h\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d19h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 22 13:46:53.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4091' Mar 22 13:46:53.689: INFO: stderr: "" Mar 22 13:46:53.689: INFO: stdout: "Name: kubectl-4091\nLabels: e2e-framework=kubectl\n e2e-run=e0a4a0b4-537d-4a6a-87cf-96f70cc2f47e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:46:53.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4091" for this suite. 
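The `kubectl describe node` output above can also be read programmatically; the Conditions rows (MemoryPressure, DiskPressure, PIDPressure, Ready) and the Allocatable block map directly onto the Node status object. A sketch with client-go (v1.15-era Get signature; newer releases add a context argument), using the node name and kubeconfig path from the log:

```go
// Sketch: read node conditions and allocatable resources via client-go.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	node, err := clientset.CoreV1().Nodes().Get("iruya-control-plane", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same data as the Conditions table in the describe output.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}
```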
Mar 22 13:47:15.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:47:15.777: INFO: namespace kubectl-4091 deletion completed in 22.085069269s • [SLOW TEST:29.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:47:15.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4c610e11-0b17-4af3-b7d6-538f3a821f9b STEP: Creating a pod to test consume secrets Mar 22 13:47:15.865: INFO: Waiting up to 5m0s for pod "pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9" in namespace "secrets-6060" to be "success or failure" Mar 22 13:47:15.868: INFO: Pod "pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427773ms Mar 22 13:47:17.872: INFO: Pod "pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112133s Mar 22 13:47:19.877: INFO: Pod "pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011890049s STEP: Saw pod success Mar 22 13:47:19.877: INFO: Pod "pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9" satisfied condition "success or failure" Mar 22 13:47:19.880: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9 container secret-env-test: STEP: delete the pod Mar 22 13:47:19.977: INFO: Waiting for pod pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9 to disappear Mar 22 13:47:20.088: INFO: Pod pod-secrets-1f792e01-9c62-4800-af73-dd5afe9ef0a9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:47:20.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6060" for this suite. 
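The secrets-6060 spec above is the environment-variable counterpart of the earlier secret volume test: a Secret key is wired into the container's env via secretKeyRef. A sketch, with illustrative names:

```go
// Sketch: consuming a Secret key as an environment variable.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```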
Mar 22 13:47:26.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:47:26.227: INFO: namespace secrets-6060 deletion completed in 6.134807307s • [SLOW TEST:10.449 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:47:26.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:47:26.284: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:47:30.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9855" for this suite. 
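The pods-9855 spec above exercises remote command execution, the same operation behind the ExecWithOptions calls and the numbered stream frames (Create stream / Data frame received) logged in the earlier KubeletManagedEtcHosts test: the command's stdin, stdout, stderr, and error channels are multiplexed as separate numbered streams over one connection. A sketch of an exec call with client-go's SPDY executor (v1.15-era signatures; the pod, namespace, and container names are illustrative):

```go
// Sketch: exec a command in a pod and capture stdout/stderr.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Build the exec subresource URL, as the framework does internally.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-9855").Name("pod-exec-example").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("Exec stderr: %q\n", stderr.String()) // the test expects ""
	fmt.Println(stdout.String())
}
```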
Mar 22 13:48:16.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:48:16.620: INFO: namespace pods-9855 deletion completed in 46.120754183s • [SLOW TEST:50.393 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:48:16.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 13:48:20.703: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:48:20.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4149" for this suite. 
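The container-runtime-4149 spec above checks TerminationMessagePolicy: with FallbackToLogsOnError set and nothing written to the termination message path, a failed container's message is taken from the tail of its log, which is why "Expected: &{DONE} to match ... DONE" passes. A sketch of that container shape; the image and command are illustrative:

```go
// Sketch: a container whose termination message falls back to log output.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// Write to stdout and exit non-zero; with no file at the
				// termination message path, the kubelet uses the log tail
				// ("DONE") as the termination message.
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy)
}
```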
Mar 22 13:48:26.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:48:26.822: INFO: namespace container-runtime-4149 deletion completed in 6.088979404s • [SLOW TEST:10.202 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:48:26.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-78.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-78.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-78.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-78.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-78.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-78.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 13:48:32.936: INFO: DNS probes using dns-78/dns-test-3396d9b5-8a42-4bed-9ff7-1e034997e74a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:48:32.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-78" for this suite. Mar 22 13:48:39.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:48:39.163: INFO: namespace dns-78 deletion completed in 6.191367545s • [SLOW TEST:12.340 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:48:39.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:48:39.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900" in namespace "projected-9040" to be "success or failure" Mar 22 13:48:39.223: INFO: Pod "downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176495ms Mar 22 13:48:41.227: INFO: Pod "downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833072s Mar 22 13:48:43.231: INFO: Pod "downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012021493s STEP: Saw pod success Mar 22 13:48:43.231: INFO: Pod "downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900" satisfied condition "success or failure" Mar 22 13:48:43.234: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900 container client-container: STEP: delete the pod Mar 22 13:48:43.254: INFO: Waiting for pod downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900 to disappear Mar 22 13:48:43.258: INFO: Pod downwardapi-volume-253768f5-5cf3-48fd-9d9e-4b3e3e0ee900 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:48:43.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9040" for this suite. Mar 22 13:48:49.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:48:49.376: INFO: namespace projected-9040 deletion completed in 6.095128335s • [SLOW TEST:10.212 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:48:49.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-5wc5 STEP: Creating a pod to test atomic-volume-subpath Mar 22 13:48:49.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5wc5" in namespace "subpath-2266" to be "success or failure" Mar 22 13:48:49.444: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.843169ms Mar 22 13:48:51.448: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007823771s Mar 22 13:48:53.452: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 4.01198049s Mar 22 13:48:55.457: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 6.01685024s Mar 22 13:48:57.460: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 8.020231072s Mar 22 13:48:59.465: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 10.024899026s Mar 22 13:49:01.469: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.029419721s Mar 22 13:49:03.474: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 14.034201858s Mar 22 13:49:05.478: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 16.038579785s Mar 22 13:49:07.486: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 18.046536217s Mar 22 13:49:09.491: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 20.05088595s Mar 22 13:49:11.494: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Running", Reason="", readiness=true. Elapsed: 22.054430885s Mar 22 13:49:13.499: INFO: Pod "pod-subpath-test-secret-5wc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058814961s STEP: Saw pod success Mar 22 13:49:13.499: INFO: Pod "pod-subpath-test-secret-5wc5" satisfied condition "success or failure" Mar 22 13:49:13.502: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-5wc5 container test-container-subpath-secret-5wc5: STEP: delete the pod Mar 22 13:49:13.518: INFO: Waiting for pod pod-subpath-test-secret-5wc5 to disappear Mar 22 13:49:13.522: INFO: Pod pod-subpath-test-secret-5wc5 no longer exists STEP: Deleting pod pod-subpath-test-secret-5wc5 Mar 22 13:49:13.522: INFO: Deleting pod "pod-subpath-test-secret-5wc5" in namespace "subpath-2266" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:49:13.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2266" for this suite. Mar 22 13:49:19.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:49:19.639: INFO: namespace subpath-2266 deletion completed in 6.111834979s • [SLOW TEST:30.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:49:19.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-bfa8ec97-e68f-45e9-84c1-ebc7d9ca2a9e STEP: Creating a pod to test consume secrets Mar 22 13:49:19.759: INFO: Waiting up to 5m0s for pod "pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4" in namespace "secrets-5275" to be "success or failure" Mar 22 13:49:19.763: INFO: Pod "pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.528109ms Mar 22 13:49:21.772: INFO: Pod "pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013017721s Mar 22 13:49:23.776: INFO: Pod "pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017157688s STEP: Saw pod success Mar 22 13:49:23.776: INFO: Pod "pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4" satisfied condition "success or failure" Mar 22 13:49:23.780: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4 container secret-volume-test: STEP: delete the pod Mar 22 13:49:23.839: INFO: Waiting for pod pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4 to disappear Mar 22 13:49:23.847: INFO: Pod pod-secrets-9f542851-f4cf-495b-a89b-751c7203d7b4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:49:23.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5275" for this suite. Mar 22 13:49:29.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:49:29.944: INFO: namespace secrets-5275 deletion completed in 6.093937012s • [SLOW TEST:10.305 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:49:29.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:49:30.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e" in namespace "projected-2762" to be "success or failure" Mar 22 13:49:30.017: INFO: Pod "downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.270826ms Mar 22 13:49:32.022: INFO: Pod "downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015694839s Mar 22 13:49:34.026: INFO: Pod "downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02004201s STEP: Saw pod success Mar 22 13:49:34.026: INFO: Pod "downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e" satisfied condition "success or failure" Mar 22 13:49:34.029: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e container client-container: STEP: delete the pod Mar 22 13:49:34.046: INFO: Waiting for pod downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e to disappear Mar 22 13:49:34.050: INFO: Pod downwardapi-volume-cd39ba87-663f-4111-8dfd-ff648b99a58e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:49:34.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2762" for this suite. Mar 22 13:49:40.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:49:40.146: INFO: namespace projected-2762 deletion completed in 6.093240262s • [SLOW TEST:10.200 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:49:40.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:49:40.191: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:49:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3516" for this suite. 
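
The websockets test above retrieves container logs through the API server's streaming machinery. A minimal client-go sketch of the same read path (using the plain HTTP log stream rather than the websocket upgrade the test itself exercises; pod name and namespace are placeholders):

    package main

    import (
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Stream logs from a hypothetical pod; the conformance test reads
        // the same endpoint over a websocket connection instead.
        req := cs.CoreV1().Pods("default").GetLogs("example-pod", &corev1.PodLogOptions{})
        stream, err := req.Stream() // v1.15-era signature; newer client-go takes a context
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        io.Copy(os.Stdout, stream)
    }
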
Mar 22 13:50:34.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:50:34.357: INFO: namespace pods-3516 deletion completed in 50.091901037s • [SLOW TEST:54.211 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:50:34.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 22 13:50:34.453: INFO: Waiting up to 5m0s for pod "pod-58f9ae4c-09e1-461a-88b8-aebb6852045c" in namespace "emptydir-4865" to be "success or failure" Mar 22 13:50:34.459: INFO: Pod "pod-58f9ae4c-09e1-461a-88b8-aebb6852045c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.700648ms Mar 22 13:50:36.463: INFO: Pod "pod-58f9ae4c-09e1-461a-88b8-aebb6852045c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009857036s Mar 22 13:50:38.467: INFO: Pod "pod-58f9ae4c-09e1-461a-88b8-aebb6852045c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01401887s STEP: Saw pod success Mar 22 13:50:38.467: INFO: Pod "pod-58f9ae4c-09e1-461a-88b8-aebb6852045c" satisfied condition "success or failure" Mar 22 13:50:38.470: INFO: Trying to get logs from node iruya-worker pod pod-58f9ae4c-09e1-461a-88b8-aebb6852045c container test-container: STEP: delete the pod Mar 22 13:50:38.489: INFO: Waiting for pod pod-58f9ae4c-09e1-461a-88b8-aebb6852045c to disappear Mar 22 13:50:38.493: INFO: Pod pod-58f9ae4c-09e1-461a-88b8-aebb6852045c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:50:38.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4865" for this suite. 
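
The emptyDir test above mounts a tmpfs-backed volume and verifies a 0777 file mode as a non-root user. A rough sketch of the pod shape involved, assuming illustrative names and a busybox image rather than the suite's exact test image:

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirTmpfsPod: a memory-medium emptyDir mounted into a container
    // that runs as a non-root UID and inspects the volume's permissions.
    func emptyDirTmpfsPod() *corev1.Pod {
        nonRoot := int64(1000) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox:1.29",
                    Command:         []string{"sh", "-c", "ls -l /test-volume"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" is what makes this emptyDir tmpfs-backed.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }
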
Mar 22 13:50:44.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:50:44.592: INFO: namespace emptydir-4865 deletion completed in 6.094624429s • [SLOW TEST:10.234 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:50:44.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0173c633-5ab0-4491-a8f0-2eacf4aa4c93 STEP: Creating a pod to test consume configMaps Mar 22 13:50:44.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f" in namespace "configmap-2319" to be "success or failure" Mar 22 13:50:44.667: INFO: Pod "pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009245ms Mar 22 13:50:46.671: INFO: Pod "pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007727278s Mar 22 13:50:48.675: INFO: Pod "pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011949071s STEP: Saw pod success Mar 22 13:50:48.675: INFO: Pod "pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f" satisfied condition "success or failure" Mar 22 13:50:48.678: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f container configmap-volume-test: STEP: delete the pod Mar 22 13:50:48.713: INFO: Waiting for pod pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f to disappear Mar 22 13:50:48.727: INFO: Pod pod-configmaps-2357610c-0834-4d52-be1f-96c6f26a433f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:50:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2319" for this suite. 
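
The ConfigMap test above consumes a volume whose Items list remaps a key to a custom path, read by a non-root container. A sketch of the volume definition, with illustrative key and path names:

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // configMapMappedVolume: instead of projecting every key under its own
    // name, an Items entry remaps a single key to a chosen relative path
    // inside the mount.
    func configMapMappedVolume(configMapName string) corev1.Volume {
        return corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                    Items: []corev1.KeyToPath{
                        {Key: "data-2", Path: "path/to/data-2"}, // illustrative key and path
                    },
                },
            },
        }
    }
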
Mar 22 13:50:54.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:50:54.819: INFO: namespace configmap-2319 deletion completed in 6.088172444s • [SLOW TEST:10.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:50:54.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-5f8f STEP: Creating a pod to test atomic-volume-subpath Mar 22 13:50:54.903: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5f8f" in namespace "subpath-734" to be "success or failure" Mar 22 13:50:54.923: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.159096ms Mar 22 13:50:56.926: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022715747s Mar 22 13:50:58.930: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.026821364s Mar 22 13:51:00.935: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 6.031170051s Mar 22 13:51:02.940: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 8.036003329s Mar 22 13:51:04.944: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 10.040462088s Mar 22 13:51:06.949: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 12.045475854s Mar 22 13:51:08.954: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 14.050037862s Mar 22 13:51:10.959: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 16.055181895s Mar 22 13:51:12.962: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 18.05892174s Mar 22 13:51:14.967: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 20.063148658s Mar 22 13:51:16.971: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Running", Reason="", readiness=true. Elapsed: 22.067473879s Mar 22 13:51:18.975: INFO: Pod "pod-subpath-test-configmap-5f8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.071586162s STEP: Saw pod success Mar 22 13:51:18.975: INFO: Pod "pod-subpath-test-configmap-5f8f" satisfied condition "success or failure" Mar 22 13:51:18.978: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-5f8f container test-container-subpath-configmap-5f8f: STEP: delete the pod Mar 22 13:51:19.011: INFO: Waiting for pod pod-subpath-test-configmap-5f8f to disappear Mar 22 13:51:19.043: INFO: Pod pod-subpath-test-configmap-5f8f no longer exists STEP: Deleting pod pod-subpath-test-configmap-5f8f Mar 22 13:51:19.043: INFO: Deleting pod "pod-subpath-test-configmap-5f8f" in namespace "subpath-734" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:51:19.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-734" for this suite. Mar 22 13:51:25.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:51:25.188: INFO: namespace subpath-734 deletion completed in 6.140000176s • [SLOW TEST:30.368 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:51:25.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-1997 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1997 to expose endpoints map[] Mar 22 13:51:25.287: INFO: Get endpoints failed (3.826509ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 22 13:51:26.648: INFO: successfully validated that service endpoint-test2 in namespace services-1997 exposes endpoints map[] (1.365214452s elapsed) STEP: Creating pod pod1 in namespace services-1997 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1997 to expose endpoints map[pod1:[80]] Mar 22 13:51:29.791: INFO: successfully validated that service endpoint-test2 in namespace services-1997 exposes endpoints map[pod1:[80]] (3.136864523s elapsed) STEP: Creating pod pod2 in namespace services-1997 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1997 to expose endpoints map[pod1:[80] pod2:[80]] Mar 22 13:51:32.854: INFO: successfully validated that service endpoint-test2 in namespace services-1997 exposes endpoints map[pod1:[80] 
pod2:[80]] (3.059685655s elapsed) STEP: Deleting pod pod1 in namespace services-1997 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1997 to expose endpoints map[pod2:[80]] Mar 22 13:51:33.880: INFO: successfully validated that service endpoint-test2 in namespace services-1997 exposes endpoints map[pod2:[80]] (1.021370629s elapsed) STEP: Deleting pod pod2 in namespace services-1997 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1997 to expose endpoints map[] Mar 22 13:51:34.895: INFO: successfully validated that service endpoint-test2 in namespace services-1997 exposes endpoints map[] (1.009427987s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:51:34.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1997" for this suite. Mar 22 13:51:56.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:51:57.017: INFO: namespace services-1997 deletion completed in 22.089683051s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:31.830 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:51:57.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 22 13:52:01.639: INFO: Successfully updated pod "labelsupdate3bb1e287-cb2d-4926-9988-9bcfb806037a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:52:03.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2710" for this suite. 
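
The labels-update test above works because a downward API volume file tracks metadata.labels: when the pod's labels are patched, the kubelet rewrites the file, and the test polls the container's output for the change. A sketch of the item involved (the projected variant under test wraps the same items in a projected volume source):

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // labelsDownwardVolume: a downward API file backed by metadata.labels.
    func labelsDownwardVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "labels",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                    }},
                },
            },
        }
    }
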
Mar 22 13:52:25.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:52:25.753: INFO: namespace projected-2710 deletion completed in 22.093121648s • [SLOW TEST:28.735 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:52:25.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 13:52:25.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8" in namespace "projected-4771" to be "success or failure" Mar 22 13:52:25.814: INFO: Pod "downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914062ms Mar 22 13:52:27.824: INFO: Pod "downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012655343s Mar 22 13:52:29.829: INFO: Pod "downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017439765s STEP: Saw pod success Mar 22 13:52:29.829: INFO: Pod "downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8" satisfied condition "success or failure" Mar 22 13:52:29.833: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8 container client-container: STEP: delete the pod Mar 22 13:52:29.877: INFO: Waiting for pod downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8 to disappear Mar 22 13:52:29.905: INFO: Pod downwardapi-volume-b2e10ed9-480e-41ab-b031-02948b291ff8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:52:29.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4771" for this suite. 
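
The memory-limit test above surfaces limits.memory through a downward API file, scaled by a divisor. A sketch of the item, assuming the container name seen in the log and a 1Mi divisor:

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // memoryLimitFile: exposes the named container's memory limit as a file,
    // divided down to mebibytes.
    func memoryLimitFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "memory_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container", // name used in the run above
                Resource:      "limits.memory",
                Divisor:       resource.MustParse("1Mi"),
            },
        }
    }
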
Mar 22 13:52:35.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:52:36.054: INFO: namespace projected-4771 deletion completed in 6.145802797s • [SLOW TEST:10.301 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:52:36.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f Mar 22 13:52:36.137: INFO: Pod name my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f: Found 0 pods out of 1 Mar 22 13:52:41.142: INFO: Pod name my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f: Found 1 pods out of 1 Mar 22 13:52:41.142: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f" are running Mar 22 13:52:41.146: INFO: Pod "my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f-smrz7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:36 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:36 +0000 UTC Reason: Message:}]) Mar 22 13:52:41.146: INFO: Trying to dial the pod Mar 22 13:52:46.157: INFO: Controller my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f: Got expected result from replica 1 [my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f-smrz7]: "my-hostname-basic-19b356fe-b71a-4315-a3fd-cc6ed71c9a7f-smrz7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:52:46.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-383" for this suite. 
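
The ReplicationController test above runs one replica of a serve-hostname pod and dials it, expecting the pod's own name back. A sketch of the controller object; the image reference and port are assumptions based on the serve-hostname pattern, not taken from the log:

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // basicRC: a single-replica RC whose pod serves its hostname over HTTP.
    func basicRC(name string) *corev1.ReplicationController {
        one := int32(1)
        labels := map[string]string{"name": name}
        return &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &one,
                Selector: labels, // RCs use a bare map selector
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
    }
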
Mar 22 13:52:52.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:52:52.256: INFO: namespace replication-controller-383 deletion completed in 6.095297444s • [SLOW TEST:16.202 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:52:52.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 13:52:52.342: INFO: Creating ReplicaSet my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61 Mar 22 13:52:52.365: INFO: Pod name my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61: Found 0 pods out of 1 Mar 22 13:52:57.370: INFO: Pod name my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61: Found 1 pods out of 1 Mar 22 13:52:57.370: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61" is running Mar 22 13:52:57.372: INFO: Pod "my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61-c86nl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-22 13:52:52 +0000 UTC Reason: Message:}]) Mar 22 13:52:57.372: INFO: Trying to dial the pod Mar 22 13:53:02.384: INFO: Controller my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61: Got expected result from replica 1 [my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61-c86nl]: "my-hostname-basic-a835dbfa-599a-46d0-8887-cbb985842e61-c86nl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:53:02.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-908" for this suite. 
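
The ReplicaSet variant immediately above exercises the same behaviour; the structural difference worth noting is the typed matchLabels selector in apps/v1 rather than the RC's bare map. A sketch:

    package examples

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // basicReplicaSet mirrors basicRC but with the apps/v1 label selector.
    func basicReplicaSet(name string) *appsv1.ReplicaSet {
        one := int32(1)
        labels := map[string]string{"name": name}
        return &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &one,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
                        }},
                    },
                },
            },
        }
    }
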
Mar 22 13:53:08.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:53:08.485: INFO: namespace replicaset-908 deletion completed in 6.096347102s • [SLOW TEST:16.229 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:53:08.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f85e7196-a74c-4215-8936-730d9f4b85bf STEP: Creating a pod to test consume configMaps Mar 22 13:53:08.561: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6" in namespace "configmap-5244" to be "success or failure" Mar 22 13:53:08.576: INFO: Pod "pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.992098ms Mar 22 13:53:10.581: INFO: Pod "pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019855552s Mar 22 13:53:12.585: INFO: Pod "pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02400751s STEP: Saw pod success Mar 22 13:53:12.585: INFO: Pod "pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6" satisfied condition "success or failure" Mar 22 13:53:12.588: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6 container configmap-volume-test: STEP: delete the pod Mar 22 13:53:12.663: INFO: Waiting for pod pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6 to disappear Mar 22 13:53:12.684: INFO: Pod pod-configmaps-6d7cb80e-67f9-4953-82a9-6a9f189cfbf6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:53:12.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5244" for this suite. 
Mar 22 13:53:18.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:53:18.784: INFO: namespace configmap-5244 deletion completed in 6.097303968s • [SLOW TEST:10.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:53:18.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6dfed5fd-64dc-4ce8-a135-d8df45dbaf94 STEP: Creating configMap with name cm-test-opt-upd-80b2d3da-f92c-4533-9bfa-11ff6d91876e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6dfed5fd-64dc-4ce8-a135-d8df45dbaf94 STEP: Updating configmap cm-test-opt-upd-80b2d3da-f92c-4533-9bfa-11ff6d91876e STEP: Creating configMap with name cm-test-opt-create-5898264e-4239-46e8-8590-4e5c7c969902 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:54:47.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2370" for this suite. 
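
The optional-updates test above leans on the Optional flag: a pod mounting an optional ConfigMap starts even when the ConfigMap is absent, and the kubelet later materialises or removes the files as ConfigMaps are created, updated, and deleted, which is the "waiting to observe update in volume" step. A sketch of such a volume:

    package examples

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // optionalConfigMapVolume: with Optional set, a missing ConfigMap is not
    // a mount error; the files simply appear once the object exists.
    func optionalConfigMapVolume(configMapName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                    Optional:             &optional,
                },
            },
        }
    }
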
Mar 22 13:55:09.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:55:09.440: INFO: namespace configmap-2370 deletion completed in 22.10923595s • [SLOW TEST:110.655 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:55:09.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 22 13:55:14.058: INFO: Successfully updated pod "labelsupdate6f18dddc-03af-49d8-a0d5-06a40d3420e1" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:55:16.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6194" for this suite. 
Mar 22 13:55:38.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:55:38.161: INFO: namespace downward-api-6194 deletion completed in 22.083092132s • [SLOW TEST:28.721 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:55:38.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 22 13:55:38.241: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 22 13:55:43.246: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:55:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3297" for this suite. 
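
The release test above changes a pod's label so it no longer matches the controller's selector, at which point the RC orphans the pod and creates a replacement. A client-go sketch of that label change (v1.15-era Patch signature; pod name, namespace, and label value are placeholders):

    package main

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Overwrite the label the controller selects on; the RC then
        // "releases" the pod and spins up a replacement to restore the
        // replica count. Newer client-go versions also take a context
        // and PatchOptions here.
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
        if _, err := cs.CoreV1().Pods("default").Patch(
            "pod-release-abc12", types.StrategicMergePatchType, patch,
        ); err != nil {
            panic(err)
        }
    }
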
Mar 22 13:55:50.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:55:50.417: INFO: namespace replication-controller-3297 deletion completed in 6.151743636s • [SLOW TEST:12.255 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:55:50.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-5647dae9-a0a1-4a8d-b413-8b4661ceab9e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5647dae9-a0a1-4a8d-b413-8b4661ceab9e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:55:56.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2367" for this suite. 
Mar 22 13:56:18.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:56:18.769: INFO: namespace configmap-2367 deletion completed in 22.098186934s • [SLOW TEST:28.351 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:56:18.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 22 13:56:18.810: INFO: Waiting up to 5m0s for pod "pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237" in namespace "emptydir-986" to be "success or failure" Mar 22 13:56:18.862: INFO: Pod "pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237": Phase="Pending", Reason="", readiness=false. Elapsed: 51.787649ms Mar 22 13:56:21.108: INFO: Pod "pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297575191s Mar 22 13:56:23.111: INFO: Pod "pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301270058s STEP: Saw pod success Mar 22 13:56:23.111: INFO: Pod "pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237" satisfied condition "success or failure" Mar 22 13:56:23.114: INFO: Trying to get logs from node iruya-worker2 pod pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237 container test-container: STEP: delete the pod Mar 22 13:56:23.158: INFO: Waiting for pod pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237 to disappear Mar 22 13:56:23.162: INFO: Pod pod-f8619478-fd7d-4bcb-9d1d-4a381e4c5237 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:56:23.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-986" for this suite. 
Mar 22 13:56:29.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:56:29.270: INFO: namespace emptydir-986 deletion completed in 6.104130128s • [SLOW TEST:10.500 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:56:29.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8250 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 13:56:29.322: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 22 13:56:53.421: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.112:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8250 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:56:53.421: INFO: >>> kubeConfig: /root/.kube/config I0322 13:56:53.474909 6 log.go:172] (0xc0021cd130) (0xc001a50d20) Create stream I0322 13:56:53.474952 6 log.go:172] (0xc0021cd130) (0xc001a50d20) Stream added, broadcasting: 1 I0322 13:56:53.477060 6 log.go:172] (0xc0021cd130) Reply frame received for 1 I0322 13:56:53.477097 6 log.go:172] (0xc0021cd130) (0xc002168780) Create stream I0322 13:56:53.477106 6 log.go:172] (0xc0021cd130) (0xc002168780) Stream added, broadcasting: 3 I0322 13:56:53.478126 6 log.go:172] (0xc0021cd130) Reply frame received for 3 I0322 13:56:53.478174 6 log.go:172] (0xc0021cd130) (0xc0027ccfa0) Create stream I0322 13:56:53.478185 6 log.go:172] (0xc0021cd130) (0xc0027ccfa0) Stream added, broadcasting: 5 I0322 13:56:53.478952 6 log.go:172] (0xc0021cd130) Reply frame received for 5 I0322 13:56:53.556480 6 log.go:172] (0xc0021cd130) Data frame received for 3 I0322 13:56:53.556512 6 log.go:172] (0xc002168780) (3) Data frame handling I0322 13:56:53.556528 6 log.go:172] (0xc002168780) (3) Data frame sent I0322 13:56:53.556549 6 log.go:172] (0xc0021cd130) Data frame received for 3 I0322 13:56:53.556561 6 log.go:172] (0xc002168780) (3) Data frame handling I0322 13:56:53.556570 6 log.go:172] (0xc0021cd130) Data frame received for 5 I0322 13:56:53.556576 6 log.go:172] (0xc0027ccfa0) (5) Data frame handling I0322 13:56:53.558343 6 log.go:172] (0xc0021cd130) Data frame received for 1 I0322 13:56:53.558365 6 log.go:172] (0xc001a50d20) (1) Data frame handling I0322 13:56:53.558378 6 
log.go:172] (0xc001a50d20) (1) Data frame sent I0322 13:56:53.558390 6 log.go:172] (0xc0021cd130) (0xc001a50d20) Stream removed, broadcasting: 1 I0322 13:56:53.558404 6 log.go:172] (0xc0021cd130) Go away received I0322 13:56:53.558556 6 log.go:172] (0xc0021cd130) (0xc001a50d20) Stream removed, broadcasting: 1 I0322 13:56:53.558589 6 log.go:172] (0xc0021cd130) (0xc002168780) Stream removed, broadcasting: 3 I0322 13:56:53.558616 6 log.go:172] (0xc0021cd130) (0xc0027ccfa0) Stream removed, broadcasting: 5 Mar 22 13:56:53.558: INFO: Found all expected endpoints: [netserver-0] Mar 22 13:56:53.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.6:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8250 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:56:53.562: INFO: >>> kubeConfig: /root/.kube/config I0322 13:56:53.596465 6 log.go:172] (0xc001b08e70) (0xc00020f360) Create stream I0322 13:56:53.596498 6 log.go:172] (0xc001b08e70) (0xc00020f360) Stream added, broadcasting: 1 I0322 13:56:53.599437 6 log.go:172] (0xc001b08e70) Reply frame received for 1 I0322 13:56:53.599483 6 log.go:172] (0xc001b08e70) (0xc0027cd040) Create stream I0322 13:56:53.599498 6 log.go:172] (0xc001b08e70) (0xc0027cd040) Stream added, broadcasting: 3 I0322 13:56:53.600762 6 log.go:172] (0xc001b08e70) Reply frame received for 3 I0322 13:56:53.600808 6 log.go:172] (0xc001b08e70) (0xc001a50dc0) Create stream I0322 13:56:53.600826 6 log.go:172] (0xc001b08e70) (0xc001a50dc0) Stream added, broadcasting: 5 I0322 13:56:53.602180 6 log.go:172] (0xc001b08e70) Reply frame received for 5 I0322 13:56:53.672715 6 log.go:172] (0xc001b08e70) Data frame received for 3 I0322 13:56:53.672753 6 log.go:172] (0xc0027cd040) (3) Data frame handling I0322 13:56:53.672761 6 log.go:172] (0xc0027cd040) (3) Data frame sent I0322 13:56:53.672766 6 log.go:172] (0xc001b08e70) Data frame received for 3 I0322 13:56:53.672771 6 log.go:172] (0xc0027cd040) (3) Data frame handling I0322 13:56:53.672789 6 log.go:172] (0xc001b08e70) Data frame received for 5 I0322 13:56:53.672797 6 log.go:172] (0xc001a50dc0) (5) Data frame handling I0322 13:56:53.674570 6 log.go:172] (0xc001b08e70) Data frame received for 1 I0322 13:56:53.674587 6 log.go:172] (0xc00020f360) (1) Data frame handling I0322 13:56:53.674594 6 log.go:172] (0xc00020f360) (1) Data frame sent I0322 13:56:53.674603 6 log.go:172] (0xc001b08e70) (0xc00020f360) Stream removed, broadcasting: 1 I0322 13:56:53.674637 6 log.go:172] (0xc001b08e70) Go away received I0322 13:56:53.674668 6 log.go:172] (0xc001b08e70) (0xc00020f360) Stream removed, broadcasting: 1 I0322 13:56:53.674677 6 log.go:172] (0xc001b08e70) (0xc0027cd040) Stream removed, broadcasting: 3 I0322 13:56:53.674692 6 log.go:172] (0xc001b08e70) (0xc001a50dc0) Stream removed, broadcasting: 5 Mar 22 13:56:53.674: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:56:53.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8250" for this suite. 
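
Stripped of the exec plumbing logged above, the node-to-pod probe is just an HTTP GET against each netserver pod's /hostName endpoint. A sketch, reusing one pod IP from this run (any given run's IPs will differ):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "time"
    )

    func main() {
        // Mirrors the curl flags in the log: a hard overall timeout,
        // then read whatever hostname the netserver reports.
        client := &http.Client{Timeout: 15 * time.Second}
        resp, err := client.Get("http://10.244.2.112:8080/hostName")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Printf("endpoint answered: %s\n", body)
    }
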
Mar 22 13:57:15.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:57:15.773: INFO: namespace pod-network-test-8250 deletion completed in 22.093755977s • [SLOW TEST:46.503 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:57:15.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 22 13:57:15.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4229 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 22 13:57:21.896: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0322 13:57:21.833662 985 log.go:172] (0xc0007a60b0) (0xc0009ca140) Create stream\nI0322 13:57:21.833704 985 log.go:172] (0xc0007a60b0) (0xc0009ca140) Stream added, broadcasting: 1\nI0322 13:57:21.836840 985 log.go:172] (0xc0007a60b0) Reply frame received for 1\nI0322 13:57:21.836879 985 log.go:172] (0xc0007a60b0) (0xc000692140) Create stream\nI0322 13:57:21.836891 985 log.go:172] (0xc0007a60b0) (0xc000692140) Stream added, broadcasting: 3\nI0322 13:57:21.838356 985 log.go:172] (0xc0007a60b0) Reply frame received for 3\nI0322 13:57:21.838402 985 log.go:172] (0xc0007a60b0) (0xc0002b79a0) Create stream\nI0322 13:57:21.838416 985 log.go:172] (0xc0007a60b0) (0xc0002b79a0) Stream added, broadcasting: 5\nI0322 13:57:21.839346 985 log.go:172] (0xc0007a60b0) Reply frame received for 5\nI0322 13:57:21.839392 985 log.go:172] (0xc0007a60b0) (0xc000692280) Create stream\nI0322 13:57:21.839404 985 log.go:172] (0xc0007a60b0) (0xc000692280) Stream added, broadcasting: 7\nI0322 13:57:21.840359 985 log.go:172] (0xc0007a60b0) Reply frame received for 7\nI0322 13:57:21.840506 985 log.go:172] (0xc000692140) (3) Writing data frame\nI0322 13:57:21.840606 985 log.go:172] (0xc000692140) (3) Writing data frame\nI0322 13:57:21.841616 985 log.go:172] (0xc0007a60b0) Data frame received for 5\nI0322 13:57:21.841634 985 log.go:172] (0xc0002b79a0) (5) Data frame handling\nI0322 13:57:21.841643 985 log.go:172] (0xc0002b79a0) (5) Data frame sent\nI0322 13:57:21.842339 985 log.go:172] (0xc0007a60b0) Data frame received for 5\nI0322 13:57:21.842362 985 log.go:172] (0xc0002b79a0) (5) Data frame handling\nI0322 13:57:21.842377 985 log.go:172] (0xc0002b79a0) (5) Data frame sent\nI0322 13:57:21.880340 985 log.go:172] (0xc0007a60b0) Data frame received for 5\nI0322 13:57:21.880379 985 log.go:172] (0xc0002b79a0) (5) Data frame handling\nI0322 13:57:21.880415 985 log.go:172] (0xc0007a60b0) Data frame received for 7\nI0322 13:57:21.880435 985 log.go:172] (0xc000692280) (7) Data frame handling\nI0322 13:57:21.880649 985 log.go:172] (0xc0007a60b0) Data frame received for 1\nI0322 13:57:21.880687 985 log.go:172] (0xc0009ca140) (1) Data frame handling\nI0322 13:57:21.880722 985 log.go:172] (0xc0009ca140) (1) Data frame sent\nI0322 13:57:21.880748 985 log.go:172] (0xc0007a60b0) (0xc0009ca140) Stream removed, broadcasting: 1\nI0322 13:57:21.880864 985 log.go:172] (0xc0007a60b0) (0xc0009ca140) Stream removed, broadcasting: 1\nI0322 13:57:21.880945 985 log.go:172] (0xc0007a60b0) (0xc000692140) Stream removed, broadcasting: 3\nI0322 13:57:21.880970 985 log.go:172] (0xc0007a60b0) (0xc0002b79a0) Stream removed, broadcasting: 5\nI0322 13:57:21.880977 985 log.go:172] (0xc0007a60b0) (0xc000692280) Stream removed, broadcasting: 7\nI0322 13:57:21.881002 985 log.go:172] (0xc0007a60b0) (0xc000692140) Stream removed, broadcasting: 3\nI0322 13:57:21.881031 985 log.go:172] (0xc0007a60b0) Go away received\n" Mar 22 13:57:21.896: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:57:23.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4229" for this suite. 
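The stderr above flags --generator=job/v1 as deprecated. A sketch of the same create-run-delete flow without the generator, assuming a placeholder namespace <ns>; note that kubectl create job cannot attach stdin, so the cat-from-stdin step is replaced by a plain echo:

kubectl create job e2e-test-rm-busybox-job -n <ns> \
  --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
kubectl wait --for=condition=complete job/e2e-test-rm-busybox-job -n <ns>
kubectl delete job e2e-test-rm-busybox-job -n <ns>   # the test's --rm does this step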
Mar 22 13:57:33.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:57:34.042: INFO: namespace kubectl-4229 deletion completed in 10.122831523s • [SLOW TEST:18.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:57:34.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 22 13:57:38.620: INFO: Successfully updated pod "pod-update-3a66ed8f-8698-475e-a43a-4f0d0500c85d" STEP: verifying the updated pod is in kubernetes Mar 22 13:57:38.626: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:57:38.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3341" for this suite. 
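The "updating the pod" step changes a mutable field in place; labels are the usual choice. A sketch with a placeholder pod name <pod> (the exact field the test mutates is not shown in this log):

# Merge-patch a label onto the running pod, then confirm it stuck.
kubectl patch pod <pod> --type merge -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod <pod> --show-labels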
Mar 22 13:58:00.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:58:00.720: INFO: namespace pods-3341 deletion completed in 22.091112381s • [SLOW TEST:26.677 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:58:00.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 22 13:58:00.783: INFO: Waiting up to 5m0s for pod "pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51" in namespace "emptydir-8510" to be "success or failure" Mar 22 13:58:00.787: INFO: Pod "pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51": Phase="Pending", Reason="", readiness=false. Elapsed: 3.574274ms Mar 22 13:58:02.804: INFO: Pod "pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020571186s Mar 22 13:58:04.807: INFO: Pod "pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023943591s STEP: Saw pod success Mar 22 13:58:04.807: INFO: Pod "pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51" satisfied condition "success or failure" Mar 22 13:58:04.810: INFO: Trying to get logs from node iruya-worker pod pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51 container test-container: STEP: delete the pod Mar 22 13:58:04.824: INFO: Waiting for pod pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51 to disappear Mar 22 13:58:04.850: INFO: Pod pod-ea409090-6bcf-4337-bfa3-b1adba8c7d51 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:58:04.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8510" for this suite. 
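The tmpfs variant above sets the emptyDir medium to Memory. A minimal pod that exercises the same mount, with an illustrative name and mount path:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Print the mount type and the directory mode, the two things the test asserts on.
    command: ["sh", "-c", "mount | grep demo-volume; ls -ld /demo-volume"]
    volumeMounts:
    - name: demo-volume
      mountPath: /demo-volume
  volumes:
  - name: demo-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod has completed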
Mar 22 13:58:10.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:58:10.967: INFO: namespace emptydir-8510 deletion completed in 6.113983875s • [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:58:10.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9653 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 13:58:11.006: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 22 13:58:37.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.10:8080/dial?request=hostName&protocol=http&host=10.244.1.9&port=8080&tries=1'] Namespace:pod-network-test-9653 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:58:37.120: INFO: >>> kubeConfig: /root/.kube/config I0322 13:58:37.162106 6 log.go:172] (0xc0020be4d0) (0xc002168f00) Create stream I0322 13:58:37.162143 6 log.go:172] (0xc0020be4d0) (0xc002168f00) Stream added, broadcasting: 1 I0322 13:58:37.164075 6 log.go:172] (0xc0020be4d0) Reply frame received for 1 I0322 13:58:37.164122 6 log.go:172] (0xc0020be4d0) (0xc001308460) Create stream I0322 13:58:37.164132 6 log.go:172] (0xc0020be4d0) (0xc001308460) Stream added, broadcasting: 3 I0322 13:58:37.164952 6 log.go:172] (0xc0020be4d0) Reply frame received for 3 I0322 13:58:37.165017 6 log.go:172] (0xc0020be4d0) (0xc002169040) Create stream I0322 13:58:37.165040 6 log.go:172] (0xc0020be4d0) (0xc002169040) Stream added, broadcasting: 5 I0322 13:58:37.165953 6 log.go:172] (0xc0020be4d0) Reply frame received for 5 I0322 13:58:37.259062 6 log.go:172] (0xc0020be4d0) Data frame received for 3 I0322 13:58:37.259087 6 log.go:172] (0xc001308460) (3) Data frame handling I0322 13:58:37.259100 6 log.go:172] (0xc001308460) (3) Data frame sent I0322 13:58:37.259439 6 log.go:172] (0xc0020be4d0) Data frame received for 3 I0322 13:58:37.259463 6 log.go:172] (0xc001308460) (3) Data frame handling I0322 13:58:37.259481 6 log.go:172] (0xc0020be4d0) Data frame received for 5 I0322 13:58:37.259502 6 log.go:172] (0xc002169040) (5) Data frame handling I0322 13:58:37.260721 6 log.go:172] (0xc0020be4d0) Data frame received for 1 I0322 13:58:37.260741 6 log.go:172] (0xc002168f00) (1) Data frame handling I0322 13:58:37.260757 6 
log.go:172] (0xc002168f00) (1) Data frame sent I0322 13:58:37.260776 6 log.go:172] (0xc0020be4d0) (0xc002168f00) Stream removed, broadcasting: 1 I0322 13:58:37.260791 6 log.go:172] (0xc0020be4d0) Go away received I0322 13:58:37.260883 6 log.go:172] (0xc0020be4d0) (0xc002168f00) Stream removed, broadcasting: 1 I0322 13:58:37.260905 6 log.go:172] (0xc0020be4d0) (0xc001308460) Stream removed, broadcasting: 3 I0322 13:58:37.260924 6 log.go:172] (0xc0020be4d0) (0xc002169040) Stream removed, broadcasting: 5 Mar 22 13:58:37.260: INFO: Waiting for endpoints: map[] Mar 22 13:58:37.263: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.10:8080/dial?request=hostName&protocol=http&host=10.244.2.115&port=8080&tries=1'] Namespace:pod-network-test-9653 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:58:37.263: INFO: >>> kubeConfig: /root/.kube/config I0322 13:58:37.291379 6 log.go:172] (0xc0020bf130) (0xc002169540) Create stream I0322 13:58:37.291405 6 log.go:172] (0xc0020bf130) (0xc002169540) Stream added, broadcasting: 1 I0322 13:58:37.293265 6 log.go:172] (0xc0020bf130) Reply frame received for 1 I0322 13:58:37.293321 6 log.go:172] (0xc0020bf130) (0xc002d6b9a0) Create stream I0322 13:58:37.293347 6 log.go:172] (0xc0020bf130) (0xc002d6b9a0) Stream added, broadcasting: 3 I0322 13:58:37.294365 6 log.go:172] (0xc0020bf130) Reply frame received for 3 I0322 13:58:37.294390 6 log.go:172] (0xc0020bf130) (0xc0021695e0) Create stream I0322 13:58:37.294398 6 log.go:172] (0xc0020bf130) (0xc0021695e0) Stream added, broadcasting: 5 I0322 13:58:37.295370 6 log.go:172] (0xc0020bf130) Reply frame received for 5 I0322 13:58:37.359644 6 log.go:172] (0xc0020bf130) Data frame received for 3 I0322 13:58:37.359684 6 log.go:172] (0xc002d6b9a0) (3) Data frame handling I0322 13:58:37.359717 6 log.go:172] (0xc002d6b9a0) (3) Data frame sent I0322 13:58:37.360321 6 log.go:172] (0xc0020bf130) Data frame received for 3 I0322 13:58:37.360351 6 log.go:172] (0xc002d6b9a0) (3) Data frame handling I0322 13:58:37.360472 6 log.go:172] (0xc0020bf130) Data frame received for 5 I0322 13:58:37.360488 6 log.go:172] (0xc0021695e0) (5) Data frame handling I0322 13:58:37.362318 6 log.go:172] (0xc0020bf130) Data frame received for 1 I0322 13:58:37.362334 6 log.go:172] (0xc002169540) (1) Data frame handling I0322 13:58:37.362344 6 log.go:172] (0xc002169540) (1) Data frame sent I0322 13:58:37.362357 6 log.go:172] (0xc0020bf130) (0xc002169540) Stream removed, broadcasting: 1 I0322 13:58:37.362367 6 log.go:172] (0xc0020bf130) Go away received I0322 13:58:37.362479 6 log.go:172] (0xc0020bf130) (0xc002169540) Stream removed, broadcasting: 1 I0322 13:58:37.362500 6 log.go:172] (0xc0020bf130) (0xc002d6b9a0) Stream removed, broadcasting: 3 I0322 13:58:37.362508 6 log.go:172] (0xc0020bf130) (0xc0021695e0) Stream removed, broadcasting: 5 Mar 22 13:58:37.362: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:58:37.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9653" for this suite. 
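The /dial endpoint asks one netserver pod to contact another and report which hostnames answered. Reproduced by hand with placeholders (<host-pod>, <probe-pod-ip> and <target-pod-ip> stand in for the pods and 10.244.x.x addresses above):

kubectl exec <host-pod> -- /bin/sh -c \
  "curl -g -q -s 'http://<probe-pod-ip>:8080/dial?request=hostName&protocol=http&host=<target-pod-ip>&port=8080&tries=1'"
# "Waiting for endpoints: map[]" above means the expected-hostname map is
# already empty, i.e. every endpoint replied on the first try.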
Mar 22 13:58:59.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:58:59.451: INFO: namespace pod-network-test-9653 deletion completed in 22.085009529s • [SLOW TEST:48.484 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:58:59.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:59:05.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4812" for this suite. Mar 22 13:59:11.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:59:11.802: INFO: namespace namespaces-4812 deletion completed in 6.095984513s STEP: Destroying namespace "nsdeletetest-3385" for this suite. Mar 22 13:59:11.804: INFO: Namespace nsdeletetest-3385 was already deleted STEP: Destroying namespace "nsdeletetest-8151" for this suite. 
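The invariant just verified, that deleting a namespace removes its services, can be replayed with ad-hoc commands; all names here are placeholders:

kubectl create namespace ns-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n ns-demo
kubectl delete namespace ns-demo   # waits for finalizers by default
kubectl get services -n ns-demo    # fails once deletion completes: no such namespace
# Recreating the namespace and listing services shows none carried over,
# which is what the test asserts.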
Mar 22 13:59:17.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 13:59:17.898: INFO: namespace nsdeletetest-8151 deletion completed in 6.093253137s • [SLOW TEST:18.446 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 13:59:17.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8006 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 13:59:17.936: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 22 13:59:38.152: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.116 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8006 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:59:38.152: INFO: >>> kubeConfig: /root/.kube/config I0322 13:59:38.173627 6 log.go:172] (0xc00077fef0) (0xc000c78fa0) Create stream I0322 13:59:38.173658 6 log.go:172] (0xc00077fef0) (0xc000c78fa0) Stream added, broadcasting: 1 I0322 13:59:38.175112 6 log.go:172] (0xc00077fef0) Reply frame received for 1 I0322 13:59:38.175137 6 log.go:172] (0xc00077fef0) (0xc000c79220) Create stream I0322 13:59:38.175146 6 log.go:172] (0xc00077fef0) (0xc000c79220) Stream added, broadcasting: 3 I0322 13:59:38.175849 6 log.go:172] (0xc00077fef0) Reply frame received for 3 I0322 13:59:38.175900 6 log.go:172] (0xc00077fef0) (0xc000c79360) Create stream I0322 13:59:38.175914 6 log.go:172] (0xc00077fef0) (0xc000c79360) Stream added, broadcasting: 5 I0322 13:59:38.176621 6 log.go:172] (0xc00077fef0) Reply frame received for 5 I0322 13:59:39.250555 6 log.go:172] (0xc00077fef0) Data frame received for 5 I0322 13:59:39.250599 6 log.go:172] (0xc000c79360) (5) Data frame handling I0322 13:59:39.250631 6 log.go:172] (0xc00077fef0) Data frame received for 3 I0322 13:59:39.250644 6 log.go:172] (0xc000c79220) (3) Data frame handling I0322 13:59:39.250659 6 log.go:172] (0xc000c79220) (3) Data frame sent I0322 13:59:39.250671 6 log.go:172] (0xc00077fef0) Data frame received for 3 I0322 13:59:39.250683 6 log.go:172] (0xc000c79220) (3) Data frame handling I0322 13:59:39.252512 6 log.go:172] (0xc00077fef0) Data frame received for 1 I0322 13:59:39.252534 6 log.go:172] (0xc000c78fa0) (1) Data frame handling I0322 13:59:39.252556 6 log.go:172] (0xc000c78fa0) (1) Data frame sent 
I0322 13:59:39.252591 6 log.go:172] (0xc00077fef0) (0xc000c78fa0) Stream removed, broadcasting: 1 I0322 13:59:39.252667 6 log.go:172] (0xc00077fef0) Go away received I0322 13:59:39.252697 6 log.go:172] (0xc00077fef0) (0xc000c78fa0) Stream removed, broadcasting: 1 I0322 13:59:39.252714 6 log.go:172] (0xc00077fef0) (0xc000c79220) Stream removed, broadcasting: 3 I0322 13:59:39.252753 6 log.go:172] (0xc00077fef0) (0xc000c79360) Stream removed, broadcasting: 5 Mar 22 13:59:39.252: INFO: Found all expected endpoints: [netserver-0] Mar 22 13:59:39.256: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8006 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 13:59:39.256: INFO: >>> kubeConfig: /root/.kube/config I0322 13:59:39.295179 6 log.go:172] (0xc00078ec60) (0xc000c79900) Create stream I0322 13:59:39.295206 6 log.go:172] (0xc00078ec60) (0xc000c79900) Stream added, broadcasting: 1 I0322 13:59:39.297297 6 log.go:172] (0xc00078ec60) Reply frame received for 1 I0322 13:59:39.297362 6 log.go:172] (0xc00078ec60) (0xc003304000) Create stream I0322 13:59:39.297379 6 log.go:172] (0xc00078ec60) (0xc003304000) Stream added, broadcasting: 3 I0322 13:59:39.298571 6 log.go:172] (0xc00078ec60) Reply frame received for 3 I0322 13:59:39.298632 6 log.go:172] (0xc00078ec60) (0xc000c79d60) Create stream I0322 13:59:39.298660 6 log.go:172] (0xc00078ec60) (0xc000c79d60) Stream added, broadcasting: 5 I0322 13:59:39.299763 6 log.go:172] (0xc00078ec60) Reply frame received for 5 I0322 13:59:40.376373 6 log.go:172] (0xc00078ec60) Data frame received for 3 I0322 13:59:40.376423 6 log.go:172] (0xc003304000) (3) Data frame handling I0322 13:59:40.376491 6 log.go:172] (0xc003304000) (3) Data frame sent I0322 13:59:40.376682 6 log.go:172] (0xc00078ec60) Data frame received for 5 I0322 13:59:40.376735 6 log.go:172] (0xc000c79d60) (5) Data frame handling I0322 13:59:40.376814 6 log.go:172] (0xc00078ec60) Data frame received for 3 I0322 13:59:40.376845 6 log.go:172] (0xc003304000) (3) Data frame handling I0322 13:59:40.378573 6 log.go:172] (0xc00078ec60) Data frame received for 1 I0322 13:59:40.378607 6 log.go:172] (0xc000c79900) (1) Data frame handling I0322 13:59:40.378651 6 log.go:172] (0xc000c79900) (1) Data frame sent I0322 13:59:40.378680 6 log.go:172] (0xc00078ec60) (0xc000c79900) Stream removed, broadcasting: 1 I0322 13:59:40.378796 6 log.go:172] (0xc00078ec60) (0xc000c79900) Stream removed, broadcasting: 1 I0322 13:59:40.378810 6 log.go:172] (0xc00078ec60) (0xc003304000) Stream removed, broadcasting: 3 I0322 13:59:40.378907 6 log.go:172] (0xc00078ec60) Go away received I0322 13:59:40.379061 6 log.go:172] (0xc00078ec60) (0xc000c79d60) Stream removed, broadcasting: 5 Mar 22 13:59:40.379: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 13:59:40.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8006" for this suite. 
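The UDP leg of the hostname probe replaces curl with nc, as in the commands logged above; placeholders again stand in for the exec pod and target IP:

kubectl exec <host-pod> -- /bin/sh -c \
  "echo hostName | nc -w 1 -u <pod-ip> 8081 | grep -v '^\s*$'"
# nc -u sends the payload over UDP and -w 1 bounds the wait for a reply;
# the grep drops blank lines so only the echoed hostname remains.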
Mar 22 14:00:02.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:00:02.508: INFO: namespace pod-network-test-8006 deletion completed in 22.124446054s • [SLOW TEST:44.610 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:00:02.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:00:02.559: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 22 14:00:04.602: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:00:05.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7018" for this suite. 
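The quota conflict is easy to stage by hand: a quota of two pods plus a controller asking for three leaves a ReplicaFailure condition until the controller is scaled down. A sketch with illustrative names and image:

kubectl create quota condition-demo --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3            # one more than the quota allows
  selector:
    app: condition-demo
  template:
    metadata:
      labels:
        app: condition-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # any small image works here
EOF
kubectl describe rc condition-demo            # Conditions: ReplicaFailure
kubectl scale rc condition-demo --replicas=2  # fits the quota; the condition clears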
Mar 22 14:00:11.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:00:11.907: INFO: namespace replication-controller-7018 deletion completed in 6.280957729s • [SLOW TEST:9.398 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:00:11.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 22 14:00:11.971: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 14:00:12.009: INFO: Waiting for terminating namespaces to be deleted... Mar 22 14:00:12.012: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 22 14:00:12.019: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.019: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 14:00:12.019: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.019: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 14:00:12.019: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 22 14:00:12.025: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.025: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 14:00:12.025: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.025: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 14:00:12.025: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.025: INFO: Container coredns ready: true, restart count 0 Mar 22 14:00:12.025: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 22 14:00:12.025: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fea5112b7d7c1d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
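The FailedScheduling event above can be provoked with any nodeSelector that no node satisfies; the label below is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo
spec:
  nodeSelector:
    disktype: does-not-exist   # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-demo   # Events: FailedScheduling, node selector mismatch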
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:00:13.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5163" for this suite. Mar 22 14:00:19.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:00:19.167: INFO: namespace sched-pred-5163 deletion completed in 6.114199232s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.260 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:00:19.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:00:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3000" for this suite. 
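The RestartCount, Phase, Ready and State fields the runtime test walks through can be read straight off the pod status. A sketch with a placeholder pod that exits nonzero under restartPolicy: Never:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
# Expect "Failed 0": the pod reaches the Failed phase and is never restarted.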
Mar 22 14:00:52.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:00:52.841: INFO: namespace container-runtime-3000 deletion completed in 6.089644635s • [SLOW TEST:33.674 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:00:52.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 22 14:00:52.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:52.921: INFO: Number of nodes with available pods: 0 Mar 22 14:00:52.921: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:00:53.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:53.930: INFO: Number of nodes with available pods: 0 Mar 22 14:00:53.930: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:00:54.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:54.929: INFO: Number of nodes with available pods: 0 Mar 22 14:00:54.929: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:00:55.932: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:55.935: INFO: Number of nodes with available pods: 0 Mar 22 14:00:55.935: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:00:56.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:56.930: INFO: Number of nodes with available pods: 2 Mar 22 14:00:56.930: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 22 14:00:56.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:00:56.952: INFO: Number of nodes with available pods: 2 Mar 22 14:00:56.952: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7871, will wait for the garbage collector to delete the pods Mar 22 14:00:58.040: INFO: Deleting DaemonSet.extensions daemon-set took: 6.011142ms Mar 22 14:00:58.340: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.240348ms Mar 22 14:01:11.944: INFO: Number of nodes with available pods: 0 Mar 22 14:01:11.944: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 14:01:11.946: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7871/daemonsets","resourceVersion":"1247225"},"items":null} Mar 22 14:01:11.949: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7871/pods","resourceVersion":"1247225"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:01:11.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7871" for this suite. 
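The revival step above works by patching a daemon pod's status to Failed, which plain kubectl cannot do; deleting a pod approximates the same controller behaviour. Placeholders stand in for the selector and pod name:

kubectl get pods -l <ds-selector> -o wide   # one daemon pod per schedulable node
kubectl delete pod <one-daemon-pod>
kubectl get pods -l <ds-selector> -w        # the controller recreates it on that node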
Mar 22 14:01:17.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:01:18.055: INFO: namespace daemonsets-7871 deletion completed in 6.093644539s • [SLOW TEST:25.214 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:01:18.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-39d0bbd0-a395-41cf-be73-28f54bb49926 STEP: Creating a pod to test consume configMaps Mar 22 14:01:18.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14" in namespace "configmap-7399" to be "success or failure" Mar 22 14:01:18.171: INFO: Pod "pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14": Phase="Pending", Reason="", readiness=false. Elapsed: 37.82015ms Mar 22 14:01:20.175: INFO: Pod "pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041764939s Mar 22 14:01:22.180: INFO: Pod "pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04665062s STEP: Saw pod success Mar 22 14:01:22.180: INFO: Pod "pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14" satisfied condition "success or failure" Mar 22 14:01:22.183: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14 container configmap-volume-test: STEP: delete the pod Mar 22 14:01:22.199: INFO: Waiting for pod pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14 to disappear Mar 22 14:01:22.203: INFO: Pod pod-configmaps-332e4154-8cd4-4fcd-8da5-69a1957a1d14 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:01:22.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7399" for this suite. 
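The "mappings" in the test name refer to configMap volume items, which expose a key under a chosen path. A sketch with illustrative names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/cm/mapped/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: mapped/data-1   # the mapping: key data-1 appears at this relative path
EOF
kubectl logs cm-map-demo   # prints value-1 once the pod has completed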
Mar 22 14:01:28.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:01:28.322: INFO: namespace configmap-7399 deletion completed in 6.114997852s • [SLOW TEST:10.266 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:01:28.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 22 14:01:28.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8034' Mar 22 14:01:28.666: INFO: stderr: "" Mar 22 14:01:28.666: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 22 14:01:28.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8034' Mar 22 14:01:28.761: INFO: stderr: "" Mar 22 14:01:28.761: INFO: stdout: "update-demo-nautilus-2bvl7 update-demo-nautilus-8nqmv " Mar 22 14:01:28.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bvl7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8034' Mar 22 14:01:29.056: INFO: stderr: "" Mar 22 14:01:29.056: INFO: stdout: "" Mar 22 14:01:29.056: INFO: update-demo-nautilus-2bvl7 is created but not running Mar 22 14:01:34.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8034' Mar 22 14:01:34.148: INFO: stderr: "" Mar 22 14:01:34.148: INFO: stdout: "update-demo-nautilus-2bvl7 update-demo-nautilus-8nqmv " Mar 22 14:01:34.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bvl7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8034' Mar 22 14:01:34.233: INFO: stderr: "" Mar 22 14:01:34.233: INFO: stdout: "true" Mar 22 14:01:34.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bvl7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8034' Mar 22 14:01:34.327: INFO: stderr: "" Mar 22 14:01:34.327: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:01:34.327: INFO: validating pod update-demo-nautilus-2bvl7 Mar 22 14:01:34.331: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:01:34.331: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:01:34.331: INFO: update-demo-nautilus-2bvl7 is verified up and running Mar 22 14:01:34.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nqmv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8034' Mar 22 14:01:34.421: INFO: stderr: "" Mar 22 14:01:34.421: INFO: stdout: "true" Mar 22 14:01:34.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nqmv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8034' Mar 22 14:01:34.518: INFO: stderr: "" Mar 22 14:01:34.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:01:34.518: INFO: validating pod update-demo-nautilus-8nqmv Mar 22 14:01:34.521: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:01:34.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:01:34.521: INFO: update-demo-nautilus-8nqmv is verified up and running STEP: using delete to clean up resources Mar 22 14:01:34.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8034' Mar 22 14:01:34.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 22 14:01:34.614: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 22 14:01:34.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8034' Mar 22 14:01:34.716: INFO: stderr: "No resources found.\n" Mar 22 14:01:34.716: INFO: stdout: "" Mar 22 14:01:34.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8034 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 14:01:34.820: INFO: stderr: "" Mar 22 14:01:34.820: INFO: stdout: "update-demo-nautilus-2bvl7\nupdate-demo-nautilus-8nqmv\n" Mar 22 14:01:35.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8034' Mar 22 14:01:35.572: INFO: stderr: "No resources found.\n" Mar 22 14:01:35.572: INFO: stdout: "" Mar 22 14:01:35.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8034 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 14:01:35.698: INFO: stderr: "" Mar 22 14:01:35.698: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:01:35.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8034" for this suite. Mar 22 14:01:57.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:01:57.826: INFO: namespace kubectl-8034 deletion completed in 22.125018896s • [SLOW TEST:29.504 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:01:57.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 22 14:01:57.916: INFO: Waiting up to 5m0s for pod "pod-8b06cdea-157d-4a56-828b-b2471eef5080" in namespace "emptydir-6740" to be "success or failure" Mar 22 14:01:57.923: INFO: Pod "pod-8b06cdea-157d-4a56-828b-b2471eef5080": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.571125ms Mar 22 14:01:59.927: INFO: Pod "pod-8b06cdea-157d-4a56-828b-b2471eef5080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011056722s Mar 22 14:02:01.931: INFO: Pod "pod-8b06cdea-157d-4a56-828b-b2471eef5080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014619505s STEP: Saw pod success Mar 22 14:02:01.931: INFO: Pod "pod-8b06cdea-157d-4a56-828b-b2471eef5080" satisfied condition "success or failure" Mar 22 14:02:01.934: INFO: Trying to get logs from node iruya-worker2 pod pod-8b06cdea-157d-4a56-828b-b2471eef5080 container test-container: STEP: delete the pod Mar 22 14:02:02.000: INFO: Waiting for pod pod-8b06cdea-157d-4a56-828b-b2471eef5080 to disappear Mar 22 14:02:02.007: INFO: Pod pod-8b06cdea-157d-4a56-828b-b2471eef5080 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:02:02.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6740" for this suite. Mar 22 14:02:08.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:02:08.101: INFO: namespace emptydir-6740 deletion completed in 6.091862116s • [SLOW TEST:10.274 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:02:08.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:02:08.195: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 22 14:02:08.206: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:08.211: INFO: Number of nodes with available pods: 0 Mar 22 14:02:08.211: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:09.214: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:09.217: INFO: Number of nodes with available pods: 0 Mar 22 14:02:09.217: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:10.215: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:10.218: INFO: Number of nodes with available pods: 0 Mar 22 14:02:10.218: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:11.215: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:11.219: INFO: Number of nodes with available pods: 0 Mar 22 14:02:11.219: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:12.227: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:12.230: INFO: Number of nodes with available pods: 2 Mar 22 14:02:12.230: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 22 14:02:12.271: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:12.271: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:12.278: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:13.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:13.283: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:13.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:14.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:14.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:14.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:15.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 22 14:02:15.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:15.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:15.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:16.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:16.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:16.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:16.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:17.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:17.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:17.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:17.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:18.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:18.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:18.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:18.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:19.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:19.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:19.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:19.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:20.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:20.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:20.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:20.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:21.282: INFO: Wrong image for pod: daemon-set-hq266. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 22 14:02:21.282: INFO: Pod daemon-set-hq266 is not available Mar 22 14:02:21.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:21.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:22.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:22.282: INFO: Pod daemon-set-nwtf5 is not available Mar 22 14:02:22.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:23.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:23.282: INFO: Pod daemon-set-nwtf5 is not available Mar 22 14:02:23.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:24.283: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:24.283: INFO: Pod daemon-set-nwtf5 is not available Mar 22 14:02:24.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:25.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:25.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:26.282: INFO: Wrong image for pod: daemon-set-j56j4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 22 14:02:26.282: INFO: Pod daemon-set-j56j4 is not available Mar 22 14:02:26.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:27.282: INFO: Pod daemon-set-m6tmt is not available Mar 22 14:02:27.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
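The churn above — daemon-set-hq266 replaced by daemon-set-nwtf5, then daemon-set-j56j4 by daemon-set-m6tmt — is the RollingUpdate controller swapping the image one pod at a time. The framework patches the object through the API; done by hand, the equivalent update and wait would be (the container name app is the same assumption as in the earlier sketch):

kubectl -n daemonsets-2708 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n daemonsets-2708 rollout status daemonset/daemon-set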
Mar 22 14:02:27.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:27.292: INFO: Number of nodes with available pods: 1 Mar 22 14:02:27.292: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:28.296: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:28.299: INFO: Number of nodes with available pods: 1 Mar 22 14:02:28.299: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:29.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:29.301: INFO: Number of nodes with available pods: 1 Mar 22 14:02:29.301: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:02:30.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:02:30.301: INFO: Number of nodes with available pods: 2 Mar 22 14:02:30.301: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2708, will wait for the garbage collector to delete the pods Mar 22 14:02:30.373: INFO: Deleting DaemonSet.extensions daemon-set took: 6.619588ms Mar 22 14:02:30.673: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.350038ms Mar 22 14:02:42.190: INFO: Number of nodes with available pods: 0 Mar 22 14:02:42.190: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 14:02:42.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2708/daemonsets","resourceVersion":"1247606"},"items":null} Mar 22 14:02:42.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2708/pods","resourceVersion":"1247606"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:02:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2708" for this suite. 
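A side note on the line repeated throughout this test: the DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the framework rightly skips iruya-control-plane when counting pods. A DaemonSet that should also cover tainted control-plane nodes would add a toleration, e.g. (a sketch against the same assumed object):

kubectl -n daemonsets-2708 patch daemonset daemon-set --type merge -p '
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule'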
Mar 22 14:02:48.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:02:48.322: INFO: namespace daemonsets-2708 deletion completed in 6.094534385s • [SLOW TEST:40.221 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:02:48.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-nclp STEP: Creating a pod to test atomic-volume-subpath Mar 22 14:02:48.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nclp" in namespace "subpath-9711" to be "success or failure" Mar 22 14:02:48.454: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Pending", Reason="", readiness=false. Elapsed: 48.201672ms Mar 22 14:02:50.458: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05210786s Mar 22 14:02:52.462: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 4.056200042s Mar 22 14:02:54.466: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 6.060437744s Mar 22 14:02:56.470: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 8.064300285s Mar 22 14:02:58.475: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 10.068627009s Mar 22 14:03:00.479: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 12.073156219s Mar 22 14:03:02.483: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 14.077179759s Mar 22 14:03:04.488: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 16.081906616s Mar 22 14:03:06.492: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 18.085744349s Mar 22 14:03:08.496: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 20.090371178s Mar 22 14:03:10.501: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Running", Reason="", readiness=true. Elapsed: 22.09490652s Mar 22 14:03:12.505: INFO: Pod "pod-subpath-test-projected-nclp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.099400473s STEP: Saw pod success Mar 22 14:03:12.505: INFO: Pod "pod-subpath-test-projected-nclp" satisfied condition "success or failure" Mar 22 14:03:12.508: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-nclp container test-container-subpath-projected-nclp: STEP: delete the pod Mar 22 14:03:12.544: INFO: Waiting for pod pod-subpath-test-projected-nclp to disappear Mar 22 14:03:12.553: INFO: Pod pod-subpath-test-projected-nclp no longer exists STEP: Deleting pod pod-subpath-test-projected-nclp Mar 22 14:03:12.553: INFO: Deleting pod "pod-subpath-test-projected-nclp" in namespace "subpath-9711" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:03:12.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9711" for this suite. Mar 22 14:03:18.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:03:18.649: INFO: namespace subpath-9711 deletion completed in 6.090703703s • [SLOW TEST:30.327 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:03:18.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:03:18.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859" in namespace "projected-3275" to be "success or failure" Mar 22 14:03:18.736: INFO: Pod "downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859": Phase="Pending", Reason="", readiness=false. Elapsed: 30.821843ms Mar 22 14:03:20.740: INFO: Pod "downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034347995s Mar 22 14:03:22.744: INFO: Pod "downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038688531s STEP: Saw pod success Mar 22 14:03:22.744: INFO: Pod "downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859" satisfied condition "success or failure" Mar 22 14:03:22.748: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859 container client-container: STEP: delete the pod Mar 22 14:03:22.779: INFO: Waiting for pod downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859 to disappear Mar 22 14:03:22.793: INFO: Pod downwardapi-volume-98d7918e-eee8-422d-8779-d9df8e766859 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:03:22.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3275" for this suite. Mar 22 14:03:28.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:03:28.899: INFO: namespace projected-3275 deletion completed in 6.101794025s • [SLOW TEST:10.250 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:03:28.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-caef84f0-b57e-448e-88fe-5211d9b998ff STEP: Creating a pod to test consume configMaps Mar 22 14:03:28.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75" in namespace "configmap-4701" to be "success or failure" Mar 22 14:03:28.985: INFO: Pod "pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75": Phase="Pending", Reason="", readiness=false. Elapsed: 3.345278ms Mar 22 14:03:30.988: INFO: Pod "pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006704855s Mar 22 14:03:32.992: INFO: Pod "pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010772817s STEP: Saw pod success Mar 22 14:03:32.993: INFO: Pod "pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75" satisfied condition "success or failure" Mar 22 14:03:32.995: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75 container configmap-volume-test: STEP: delete the pod Mar 22 14:03:33.017: INFO: Waiting for pod pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75 to disappear Mar 22 14:03:33.021: INFO: Pod pod-configmaps-2078d744-0752-4779-b5fc-597eb2caee75 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:03:33.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4701" for this suite. Mar 22 14:03:39.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:03:39.114: INFO: namespace configmap-4701 deletion completed in 6.090067964s • [SLOW TEST:10.214 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:03:39.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-839195d9-3ace-4710-aafc-93290eed17dd STEP: Creating a pod to test consume secrets Mar 22 14:03:39.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3" in namespace "projected-8053" to be "success or failure" Mar 22 14:03:39.183: INFO: Pod "pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.128866ms Mar 22 14:03:41.198: INFO: Pod "pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018423469s Mar 22 14:03:43.202: INFO: Pod "pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022677261s STEP: Saw pod success Mar 22 14:03:43.202: INFO: Pod "pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3" satisfied condition "success or failure" Mar 22 14:03:43.206: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3 container secret-volume-test: STEP: delete the pod Mar 22 14:03:43.238: INFO: Waiting for pod pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3 to disappear Mar 22 14:03:43.248: INFO: Pod pod-projected-secrets-d3b5f459-2266-4121-aa9e-46de1be7a1f3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:03:43.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8053" for this suite. Mar 22 14:03:49.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:03:49.341: INFO: namespace projected-8053 deletion completed in 6.090182746s • [SLOW TEST:10.228 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:03:49.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0322 14:04:19.958278 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
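What makes this test pass is the propagation policy: the deployment is deleted with deleteOptions.propagationPolicy=Orphan, so the garbage collector must leave the ReplicaSet behind for the full 30-second window. From the command line the equivalent is the cascade flag — the deployment name below is a stand-in, since the log never prints it:

kubectl -n gc-9386 delete deployment example-deploy --cascade=false   # kubectl of this era; newer releases spell it --cascade=orphan
kubectl -n gc-9386 get rs   # the orphaned ReplicaSet should still be listed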
Mar 22 14:04:19.958: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:04:19.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9386" for this suite. Mar 22 14:04:26.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:04:26.066: INFO: namespace gc-9386 deletion completed in 6.104377688s • [SLOW TEST:36.724 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:04:26.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:04:26.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d" in namespace "projected-6833" to be "success or failure" Mar 22 14:04:26.153: INFO: Pod "downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134117ms Mar 22 14:04:28.157: INFO: Pod "downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010825243s Mar 22 14:04:30.161: INFO: Pod "downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014953592s STEP: Saw pod success Mar 22 14:04:30.161: INFO: Pod "downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d" satisfied condition "success or failure" Mar 22 14:04:30.164: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d container client-container: STEP: delete the pod Mar 22 14:04:30.184: INFO: Waiting for pod downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d to disappear Mar 22 14:04:30.188: INFO: Pod downwardapi-volume-56c45ace-71e9-4414-81aa-80688b69bd7d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:04:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6833" for this suite. Mar 22 14:04:36.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:04:36.283: INFO: namespace projected-6833 deletion completed in 6.091958232s • [SLOW TEST:10.217 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:04:36.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 22 14:04:36.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3847' Mar 22 14:04:36.608: INFO: stderr: "" Mar 22 14:04:36.608: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 22 14:04:37.613: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:04:37.613: INFO: Found 0 / 1 Mar 22 14:04:38.617: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:04:38.617: INFO: Found 0 / 1 Mar 22 14:04:39.613: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:04:39.613: INFO: Found 0 / 1 Mar 22 14:04:40.623: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:04:40.623: INFO: Found 1 / 1 Mar 22 14:04:40.623: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 22 14:04:40.627: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:04:40.627: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
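The manifest piped to kubectl create -f - above is not echoed into the log. Reconstructed from what is visible — the redis-master name, the app=redis selector, a single replica, and a container also named redis-master — it would be roughly this sketch (the image is an assumption; the startup banner only shows Redis 3.2.12):

kubectl -n kubectl-3847 create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # assumed image
EOF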
STEP: checking for a matching strings Mar 22 14:04:40.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847' Mar 22 14:04:40.744: INFO: stderr: "" Mar 22 14:04:40.744: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Mar 14:04:38.876 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Mar 14:04:38.876 # Server started, Redis version 3.2.12\n1:M 22 Mar 14:04:38.876 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Mar 14:04:38.876 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 22 14:04:40.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847 --tail=1' Mar 22 14:04:40.850: INFO: stderr: "" Mar 22 14:04:40.850: INFO: stdout: "1:M 22 Mar 14:04:38.876 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 22 14:04:40.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847 --limit-bytes=1' Mar 22 14:04:40.949: INFO: stderr: "" Mar 22 14:04:40.949: INFO: stdout: " " STEP: exposing timestamps Mar 22 14:04:40.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847 --tail=1 --timestamps' Mar 22 14:04:41.055: INFO: stderr: "" Mar 22 14:04:41.055: INFO: stdout: "2020-03-22T14:04:38.876873507Z 1:M 22 Mar 14:04:38.876 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 22 14:04:43.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847 --since=1s' Mar 22 14:04:43.655: INFO: stderr: "" Mar 22 14:04:43.655: INFO: stdout: "" Mar 22 14:04:43.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-stg5v redis-master --namespace=kubectl-3847 --since=24h' Mar 22 14:04:43.764: INFO: stderr: "" Mar 22 14:04:43.764: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Mar 14:04:38.876 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Mar 14:04:38.876 # Server started, Redis version 3.2.12\n1:M 22 Mar 14:04:38.876 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Mar 14:04:38.876 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 22 14:04:43.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3847' Mar 22 14:04:43.857: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 22 14:04:43.857: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 22 14:04:43.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3847' Mar 22 14:04:43.955: INFO: stderr: "No resources found.\n" Mar 22 14:04:43.955: INFO: stdout: "" Mar 22 14:04:43.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3847 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 14:04:44.050: INFO: stderr: "" Mar 22 14:04:44.050: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:04:44.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3847" for this suite. 
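Condensed, the filtering knobs the test just exercised — all standard kubectl logs flags, shown against the same pod and container:

kubectl -n kubectl-3847 logs redis-master-stg5v redis-master --tail=1                # last line only
kubectl -n kubectl-3847 logs redis-master-stg5v redis-master --limit-bytes=1         # first byte of output
kubectl -n kubectl-3847 logs redis-master-stg5v redis-master --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl -n kubectl-3847 logs redis-master-stg5v redis-master --since=1s              # only entries newer than one second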
Mar 22 14:05:06.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:05:06.239: INFO: namespace kubectl-3847 deletion completed in 22.185523525s • [SLOW TEST:29.955 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:05:06.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2643 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 22 14:05:06.298: INFO: Found 0 stateful pods, waiting for 3 Mar 22 14:05:16.303: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:05:16.303: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:05:16.303: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 22 14:05:16.332: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 22 14:05:26.371: INFO: Updating stateful set ss2 Mar 22 14:05:26.382: INFO: Waiting for Pod statefulset-2643/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 22 14:05:36.527: INFO: Found 2 stateful pods, waiting for 3 Mar 22 14:05:46.538: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:05:46.538: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:05:46.538: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 22 14:05:46.563: INFO: Updating stateful set ss2 Mar 22 14:05:46.612: INFO: Waiting for Pod statefulset-2643/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 22 14:05:56.637: 
INFO: Updating stateful set ss2 Mar 22 14:05:56.643: INFO: Waiting for StatefulSet statefulset-2643/ss2 to complete update Mar 22 14:05:56.643: INFO: Waiting for Pod statefulset-2643/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 22 14:06:06.651: INFO: Deleting all statefulset in ns statefulset-2643 Mar 22 14:06:06.655: INFO: Scaling statefulset ss2 to 0 Mar 22 14:06:26.670: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:06:26.673: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:06:26.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2643" for this suite. Mar 22 14:06:32.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:06:32.794: INFO: namespace statefulset-2643 deletion completed in 6.102886377s • [SLOW TEST:86.555 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:06:32.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6578 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6578 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6578 Mar 22 14:06:32.855: INFO: Found 0 stateful pods, waiting for 1 Mar 22 14:06:42.860: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 22 14:06:42.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-0 
-- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:06:43.126: INFO: stderr: "I0322 14:06:43.006436 1487 log.go:172] (0xc0009d44d0) (0xc0003ae820) Create stream\nI0322 14:06:43.006499 1487 log.go:172] (0xc0009d44d0) (0xc0003ae820) Stream added, broadcasting: 1\nI0322 14:06:43.009332 1487 log.go:172] (0xc0009d44d0) Reply frame received for 1\nI0322 14:06:43.009436 1487 log.go:172] (0xc0009d44d0) (0xc000948000) Create stream\nI0322 14:06:43.009501 1487 log.go:172] (0xc0009d44d0) (0xc000948000) Stream added, broadcasting: 3\nI0322 14:06:43.011627 1487 log.go:172] (0xc0009d44d0) Reply frame received for 3\nI0322 14:06:43.011669 1487 log.go:172] (0xc0009d44d0) (0xc0009480a0) Create stream\nI0322 14:06:43.011685 1487 log.go:172] (0xc0009d44d0) (0xc0009480a0) Stream added, broadcasting: 5\nI0322 14:06:43.012633 1487 log.go:172] (0xc0009d44d0) Reply frame received for 5\nI0322 14:06:43.088569 1487 log.go:172] (0xc0009d44d0) Data frame received for 5\nI0322 14:06:43.088611 1487 log.go:172] (0xc0009480a0) (5) Data frame handling\nI0322 14:06:43.088634 1487 log.go:172] (0xc0009480a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:06:43.119678 1487 log.go:172] (0xc0009d44d0) Data frame received for 3\nI0322 14:06:43.119718 1487 log.go:172] (0xc000948000) (3) Data frame handling\nI0322 14:06:43.119746 1487 log.go:172] (0xc000948000) (3) Data frame sent\nI0322 14:06:43.120154 1487 log.go:172] (0xc0009d44d0) Data frame received for 5\nI0322 14:06:43.120193 1487 log.go:172] (0xc0009480a0) (5) Data frame handling\nI0322 14:06:43.120215 1487 log.go:172] (0xc0009d44d0) Data frame received for 3\nI0322 14:06:43.120234 1487 log.go:172] (0xc000948000) (3) Data frame handling\nI0322 14:06:43.122370 1487 log.go:172] (0xc0009d44d0) Data frame received for 1\nI0322 14:06:43.122385 1487 log.go:172] (0xc0003ae820) (1) Data frame handling\nI0322 14:06:43.122399 1487 log.go:172] (0xc0003ae820) (1) Data frame sent\nI0322 14:06:43.122411 1487 log.go:172] (0xc0009d44d0) (0xc0003ae820) Stream removed, broadcasting: 1\nI0322 14:06:43.122616 1487 log.go:172] (0xc0009d44d0) Go away received\nI0322 14:06:43.122661 1487 log.go:172] (0xc0009d44d0) (0xc0003ae820) Stream removed, broadcasting: 1\nI0322 14:06:43.122685 1487 log.go:172] (0xc0009d44d0) (0xc000948000) Stream removed, broadcasting: 3\nI0322 14:06:43.122695 1487 log.go:172] (0xc0009d44d0) (0xc0009480a0) Stream removed, broadcasting: 5\n" Mar 22 14:06:43.126: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:06:43.126: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:06:43.130: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 22 14:06:53.135: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:06:53.135: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:06:53.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999512s Mar 22 14:06:54.156: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994254894s Mar 22 14:06:55.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989634541s Mar 22 14:06:56.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984684537s Mar 22 14:06:57.170: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980570239s Mar 22 14:06:58.175: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 4.975928931s Mar 22 14:06:59.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970946238s Mar 22 14:07:00.184: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966307987s Mar 22 14:07:01.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.961722566s Mar 22 14:07:02.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.203331ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6578 Mar 22 14:07:03.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:07:03.461: INFO: stderr: "I0322 14:07:03.353843 1509 log.go:172] (0xc0009fc420) (0xc0003b2820) Create stream\nI0322 14:07:03.353918 1509 log.go:172] (0xc0009fc420) (0xc0003b2820) Stream added, broadcasting: 1\nI0322 14:07:03.360715 1509 log.go:172] (0xc0009fc420) Reply frame received for 1\nI0322 14:07:03.360757 1509 log.go:172] (0xc0009fc420) (0xc000856000) Create stream\nI0322 14:07:03.361357 1509 log.go:172] (0xc0009fc420) (0xc000856000) Stream added, broadcasting: 3\nI0322 14:07:03.362747 1509 log.go:172] (0xc0009fc420) Reply frame received for 3\nI0322 14:07:03.362788 1509 log.go:172] (0xc0009fc420) (0xc00068c3c0) Create stream\nI0322 14:07:03.362797 1509 log.go:172] (0xc0009fc420) (0xc00068c3c0) Stream added, broadcasting: 5\nI0322 14:07:03.363573 1509 log.go:172] (0xc0009fc420) Reply frame received for 5\nI0322 14:07:03.453751 1509 log.go:172] (0xc0009fc420) Data frame received for 3\nI0322 14:07:03.453782 1509 log.go:172] (0xc000856000) (3) Data frame handling\nI0322 14:07:03.453814 1509 log.go:172] (0xc000856000) (3) Data frame sent\nI0322 14:07:03.454075 1509 log.go:172] (0xc0009fc420) Data frame received for 5\nI0322 14:07:03.454152 1509 log.go:172] (0xc00068c3c0) (5) Data frame handling\nI0322 14:07:03.454183 1509 log.go:172] (0xc00068c3c0) (5) Data frame sent\nI0322 14:07:03.454201 1509 log.go:172] (0xc0009fc420) Data frame received for 5\nI0322 14:07:03.454215 1509 log.go:172] (0xc00068c3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:07:03.454249 1509 log.go:172] (0xc0009fc420) Data frame received for 3\nI0322 14:07:03.454288 1509 log.go:172] (0xc000856000) (3) Data frame handling\nI0322 14:07:03.455808 1509 log.go:172] (0xc0009fc420) Data frame received for 1\nI0322 14:07:03.455845 1509 log.go:172] (0xc0003b2820) (1) Data frame handling\nI0322 14:07:03.455868 1509 log.go:172] (0xc0003b2820) (1) Data frame sent\nI0322 14:07:03.455891 1509 log.go:172] (0xc0009fc420) (0xc0003b2820) Stream removed, broadcasting: 1\nI0322 14:07:03.455929 1509 log.go:172] (0xc0009fc420) Go away received\nI0322 14:07:03.456627 1509 log.go:172] (0xc0009fc420) (0xc0003b2820) Stream removed, broadcasting: 1\nI0322 14:07:03.456675 1509 log.go:172] (0xc0009fc420) (0xc000856000) Stream removed, broadcasting: 3\nI0322 14:07:03.456702 1509 log.go:172] (0xc0009fc420) (0xc00068c3c0) Stream removed, broadcasting: 5\n" Mar 22 14:07:03.461: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:07:03.461: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:07:03.465: INFO: Found 1 stateful pods, waiting for 3 Mar 22 14:07:13.469: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=true Mar 22 14:07:13.469: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:07:13.469: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 22 14:07:13.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:07:13.698: INFO: stderr: "I0322 14:07:13.610814 1529 log.go:172] (0xc000ab8370) (0xc00097e6e0) Create stream\nI0322 14:07:13.610877 1529 log.go:172] (0xc000ab8370) (0xc00097e6e0) Stream added, broadcasting: 1\nI0322 14:07:13.613729 1529 log.go:172] (0xc000ab8370) Reply frame received for 1\nI0322 14:07:13.613789 1529 log.go:172] (0xc000ab8370) (0xc0006a0320) Create stream\nI0322 14:07:13.613804 1529 log.go:172] (0xc000ab8370) (0xc0006a0320) Stream added, broadcasting: 3\nI0322 14:07:13.614937 1529 log.go:172] (0xc000ab8370) Reply frame received for 3\nI0322 14:07:13.615001 1529 log.go:172] (0xc000ab8370) (0xc0008e2000) Create stream\nI0322 14:07:13.615037 1529 log.go:172] (0xc000ab8370) (0xc0008e2000) Stream added, broadcasting: 5\nI0322 14:07:13.616018 1529 log.go:172] (0xc000ab8370) Reply frame received for 5\nI0322 14:07:13.691409 1529 log.go:172] (0xc000ab8370) Data frame received for 3\nI0322 14:07:13.691464 1529 log.go:172] (0xc0006a0320) (3) Data frame handling\nI0322 14:07:13.691488 1529 log.go:172] (0xc0006a0320) (3) Data frame sent\nI0322 14:07:13.691507 1529 log.go:172] (0xc000ab8370) Data frame received for 3\nI0322 14:07:13.691524 1529 log.go:172] (0xc0006a0320) (3) Data frame handling\nI0322 14:07:13.691543 1529 log.go:172] (0xc000ab8370) Data frame received for 5\nI0322 14:07:13.691560 1529 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0322 14:07:13.691578 1529 log.go:172] (0xc0008e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:07:13.691606 1529 log.go:172] (0xc000ab8370) Data frame received for 5\nI0322 14:07:13.691660 1529 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0322 14:07:13.693499 1529 log.go:172] (0xc000ab8370) Data frame received for 1\nI0322 14:07:13.693537 1529 log.go:172] (0xc00097e6e0) (1) Data frame handling\nI0322 14:07:13.693572 1529 log.go:172] (0xc00097e6e0) (1) Data frame sent\nI0322 14:07:13.693602 1529 log.go:172] (0xc000ab8370) (0xc00097e6e0) Stream removed, broadcasting: 1\nI0322 14:07:13.693695 1529 log.go:172] (0xc000ab8370) Go away received\nI0322 14:07:13.694077 1529 log.go:172] (0xc000ab8370) (0xc00097e6e0) Stream removed, broadcasting: 1\nI0322 14:07:13.694100 1529 log.go:172] (0xc000ab8370) (0xc0006a0320) Stream removed, broadcasting: 3\nI0322 14:07:13.694110 1529 log.go:172] (0xc000ab8370) (0xc0008e2000) Stream removed, broadcasting: 5\n" Mar 22 14:07:13.698: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:07:13.698: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:07:13.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:07:13.933: INFO: stderr: "I0322 14:07:13.821297 1551 log.go:172] (0xc000116dc0) (0xc0006ee640) Create stream\nI0322 14:07:13.821363 1551 
log.go:172] (0xc000116dc0) (0xc0006ee640) Stream added, broadcasting: 1\nI0322 14:07:13.826429 1551 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0322 14:07:13.826482 1551 log.go:172] (0xc000116dc0) (0xc0007da140) Create stream\nI0322 14:07:13.826503 1551 log.go:172] (0xc000116dc0) (0xc0007da140) Stream added, broadcasting: 3\nI0322 14:07:13.829771 1551 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0322 14:07:13.829812 1551 log.go:172] (0xc000116dc0) (0xc0006ee6e0) Create stream\nI0322 14:07:13.829821 1551 log.go:172] (0xc000116dc0) (0xc0006ee6e0) Stream added, broadcasting: 5\nI0322 14:07:13.830884 1551 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0322 14:07:13.892270 1551 log.go:172] (0xc000116dc0) Data frame received for 5\nI0322 14:07:13.892299 1551 log.go:172] (0xc0006ee6e0) (5) Data frame handling\nI0322 14:07:13.892319 1551 log.go:172] (0xc0006ee6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:07:13.926377 1551 log.go:172] (0xc000116dc0) Data frame received for 3\nI0322 14:07:13.926410 1551 log.go:172] (0xc0007da140) (3) Data frame handling\nI0322 14:07:13.926436 1551 log.go:172] (0xc0007da140) (3) Data frame sent\nI0322 14:07:13.926489 1551 log.go:172] (0xc000116dc0) Data frame received for 3\nI0322 14:07:13.926508 1551 log.go:172] (0xc0007da140) (3) Data frame handling\nI0322 14:07:13.926711 1551 log.go:172] (0xc000116dc0) Data frame received for 5\nI0322 14:07:13.926753 1551 log.go:172] (0xc0006ee6e0) (5) Data frame handling\nI0322 14:07:13.928149 1551 log.go:172] (0xc000116dc0) Data frame received for 1\nI0322 14:07:13.928186 1551 log.go:172] (0xc0006ee640) (1) Data frame handling\nI0322 14:07:13.928218 1551 log.go:172] (0xc0006ee640) (1) Data frame sent\nI0322 14:07:13.928250 1551 log.go:172] (0xc000116dc0) (0xc0006ee640) Stream removed, broadcasting: 1\nI0322 14:07:13.928281 1551 log.go:172] (0xc000116dc0) Go away received\nI0322 14:07:13.928668 1551 log.go:172] (0xc000116dc0) (0xc0006ee640) Stream removed, broadcasting: 1\nI0322 14:07:13.928691 1551 log.go:172] (0xc000116dc0) (0xc0007da140) Stream removed, broadcasting: 3\nI0322 14:07:13.928703 1551 log.go:172] (0xc000116dc0) (0xc0006ee6e0) Stream removed, broadcasting: 5\n" Mar 22 14:07:13.933: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:07:13.933: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:07:13.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:07:14.180: INFO: stderr: "I0322 14:07:14.064259 1571 log.go:172] (0xc00012afd0) (0xc0006eeaa0) Create stream\nI0322 14:07:14.064329 1571 log.go:172] (0xc00012afd0) (0xc0006eeaa0) Stream added, broadcasting: 1\nI0322 14:07:14.067103 1571 log.go:172] (0xc00012afd0) Reply frame received for 1\nI0322 14:07:14.067170 1571 log.go:172] (0xc00012afd0) (0xc000654000) Create stream\nI0322 14:07:14.067897 1571 log.go:172] (0xc00012afd0) (0xc000654000) Stream added, broadcasting: 3\nI0322 14:07:14.069752 1571 log.go:172] (0xc00012afd0) Reply frame received for 3\nI0322 14:07:14.069880 1571 log.go:172] (0xc00012afd0) (0xc00078a000) Create stream\nI0322 14:07:14.069966 1571 log.go:172] (0xc00012afd0) (0xc00078a000) Stream added, broadcasting: 5\nI0322 14:07:14.071066 1571 log.go:172] (0xc00012afd0) Reply frame received for 5\nI0322 14:07:14.122973 1571 
log.go:172] (0xc00012afd0) Data frame received for 5\nI0322 14:07:14.123002 1571 log.go:172] (0xc00078a000) (5) Data frame handling\nI0322 14:07:14.123024 1571 log.go:172] (0xc00078a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:07:14.172308 1571 log.go:172] (0xc00012afd0) Data frame received for 3\nI0322 14:07:14.172356 1571 log.go:172] (0xc000654000) (3) Data frame handling\nI0322 14:07:14.172419 1571 log.go:172] (0xc000654000) (3) Data frame sent\nI0322 14:07:14.172446 1571 log.go:172] (0xc00012afd0) Data frame received for 3\nI0322 14:07:14.172467 1571 log.go:172] (0xc000654000) (3) Data frame handling\nI0322 14:07:14.172547 1571 log.go:172] (0xc00012afd0) Data frame received for 5\nI0322 14:07:14.172566 1571 log.go:172] (0xc00078a000) (5) Data frame handling\nI0322 14:07:14.174806 1571 log.go:172] (0xc00012afd0) Data frame received for 1\nI0322 14:07:14.174836 1571 log.go:172] (0xc0006eeaa0) (1) Data frame handling\nI0322 14:07:14.174851 1571 log.go:172] (0xc0006eeaa0) (1) Data frame sent\nI0322 14:07:14.174872 1571 log.go:172] (0xc00012afd0) (0xc0006eeaa0) Stream removed, broadcasting: 1\nI0322 14:07:14.174937 1571 log.go:172] (0xc00012afd0) Go away received\nI0322 14:07:14.175266 1571 log.go:172] (0xc00012afd0) (0xc0006eeaa0) Stream removed, broadcasting: 1\nI0322 14:07:14.175286 1571 log.go:172] (0xc00012afd0) (0xc000654000) Stream removed, broadcasting: 3\nI0322 14:07:14.175298 1571 log.go:172] (0xc00012afd0) (0xc00078a000) Stream removed, broadcasting: 5\n" Mar 22 14:07:14.180: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:07:14.180: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:07:14.180: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:07:14.183: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 22 14:07:24.192: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:07:24.192: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:07:24.192: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:07:24.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999542s Mar 22 14:07:25.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992533568s Mar 22 14:07:26.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987512473s Mar 22 14:07:27.222: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98252231s Mar 22 14:07:28.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977076842s Mar 22 14:07:29.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972550348s Mar 22 14:07:30.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96674425s Mar 22 14:07:31.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961469315s Mar 22 14:07:32.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955842694s Mar 22 14:07:33.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.753846ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6578 Mar 22 14:07:34.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:07:36.963: INFO: stderr: "I0322 14:07:36.898169 1591 log.go:172] (0xc00079a420) (0xc000a94780) Create stream\nI0322 14:07:36.898195 1591 log.go:172] (0xc00079a420) (0xc000a94780) Stream added, broadcasting: 1\nI0322 14:07:36.900173 1591 log.go:172] (0xc00079a420) Reply frame received for 1\nI0322 14:07:36.900201 1591 log.go:172] (0xc00079a420) (0xc000b58000) Create stream\nI0322 14:07:36.900216 1591 log.go:172] (0xc00079a420) (0xc000b58000) Stream added, broadcasting: 3\nI0322 14:07:36.901405 1591 log.go:172] (0xc00079a420) Reply frame received for 3\nI0322 14:07:36.901464 1591 log.go:172] (0xc00079a420) (0xc0007c0140) Create stream\nI0322 14:07:36.901496 1591 log.go:172] (0xc00079a420) (0xc0007c0140) Stream added, broadcasting: 5\nI0322 14:07:36.902410 1591 log.go:172] (0xc00079a420) Reply frame received for 5\nI0322 14:07:36.956956 1591 log.go:172] (0xc00079a420) Data frame received for 3\nI0322 14:07:36.956991 1591 log.go:172] (0xc000b58000) (3) Data frame handling\nI0322 14:07:36.957012 1591 log.go:172] (0xc000b58000) (3) Data frame sent\nI0322 14:07:36.957027 1591 log.go:172] (0xc00079a420) Data frame received for 3\nI0322 14:07:36.957043 1591 log.go:172] (0xc000b58000) (3) Data frame handling\nI0322 14:07:36.957293 1591 log.go:172] (0xc00079a420) Data frame received for 5\nI0322 14:07:36.957324 1591 log.go:172] (0xc0007c0140) (5) Data frame handling\nI0322 14:07:36.957353 1591 log.go:172] (0xc0007c0140) (5) Data frame sent\nI0322 14:07:36.957363 1591 log.go:172] (0xc00079a420) Data frame received for 5\nI0322 14:07:36.957372 1591 log.go:172] (0xc0007c0140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:07:36.958682 1591 log.go:172] (0xc00079a420) Data frame received for 1\nI0322 14:07:36.958725 1591 log.go:172] (0xc000a94780) (1) Data frame handling\nI0322 14:07:36.958741 1591 log.go:172] (0xc000a94780) (1) Data frame sent\nI0322 14:07:36.958923 1591 log.go:172] (0xc00079a420) (0xc000a94780) Stream removed, broadcasting: 1\nI0322 14:07:36.959418 1591 log.go:172] (0xc00079a420) (0xc000a94780) Stream removed, broadcasting: 1\nI0322 14:07:36.959446 1591 log.go:172] (0xc00079a420) (0xc000b58000) Stream removed, broadcasting: 3\nI0322 14:07:36.959469 1591 log.go:172] (0xc00079a420) (0xc0007c0140) Stream removed, broadcasting: 5\n" Mar 22 14:07:36.963: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:07:36.963: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:07:36.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:07:37.167: INFO: stderr: "I0322 14:07:37.095458 1624 log.go:172] (0xc000916370) (0xc0009728c0) Create stream\nI0322 14:07:37.095522 1624 log.go:172] (0xc000916370) (0xc0009728c0) Stream added, broadcasting: 1\nI0322 14:07:37.098994 1624 log.go:172] (0xc000916370) Reply frame received for 1\nI0322 14:07:37.099043 1624 log.go:172] (0xc000916370) (0xc000972000) Create stream\nI0322 14:07:37.099063 1624 log.go:172] (0xc000916370) (0xc000972000) Stream added, broadcasting: 3\nI0322 14:07:37.099859 1624 log.go:172] (0xc000916370) Reply frame received for 3\nI0322 14:07:37.099909 1624 log.go:172] (0xc000916370) (0xc0005f61e0) Create stream\nI0322 14:07:37.099928 1624 log.go:172] (0xc000916370) (0xc0005f61e0) Stream added, 
broadcasting: 5\nI0322 14:07:37.100783 1624 log.go:172] (0xc000916370) Reply frame received for 5\nI0322 14:07:37.160929 1624 log.go:172] (0xc000916370) Data frame received for 3\nI0322 14:07:37.161098 1624 log.go:172] (0xc000972000) (3) Data frame handling\nI0322 14:07:37.161273 1624 log.go:172] (0xc000972000) (3) Data frame sent\nI0322 14:07:37.161311 1624 log.go:172] (0xc000916370) Data frame received for 3\nI0322 14:07:37.161330 1624 log.go:172] (0xc000972000) (3) Data frame handling\nI0322 14:07:37.161353 1624 log.go:172] (0xc000916370) Data frame received for 5\nI0322 14:07:37.161389 1624 log.go:172] (0xc0005f61e0) (5) Data frame handling\nI0322 14:07:37.161410 1624 log.go:172] (0xc0005f61e0) (5) Data frame sent\nI0322 14:07:37.161433 1624 log.go:172] (0xc000916370) Data frame received for 5\nI0322 14:07:37.161448 1624 log.go:172] (0xc0005f61e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:07:37.162894 1624 log.go:172] (0xc000916370) Data frame received for 1\nI0322 14:07:37.162925 1624 log.go:172] (0xc0009728c0) (1) Data frame handling\nI0322 14:07:37.162946 1624 log.go:172] (0xc0009728c0) (1) Data frame sent\nI0322 14:07:37.162971 1624 log.go:172] (0xc000916370) (0xc0009728c0) Stream removed, broadcasting: 1\nI0322 14:07:37.162995 1624 log.go:172] (0xc000916370) Go away received\nI0322 14:07:37.163430 1624 log.go:172] (0xc000916370) (0xc0009728c0) Stream removed, broadcasting: 1\nI0322 14:07:37.163457 1624 log.go:172] (0xc000916370) (0xc000972000) Stream removed, broadcasting: 3\nI0322 14:07:37.163469 1624 log.go:172] (0xc000916370) (0xc0005f61e0) Stream removed, broadcasting: 5\n" Mar 22 14:07:37.168: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:07:37.168: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:07:37.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6578 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:07:37.359: INFO: stderr: "I0322 14:07:37.294590 1645 log.go:172] (0xc000a664d0) (0xc0005f4820) Create stream\nI0322 14:07:37.294656 1645 log.go:172] (0xc000a664d0) (0xc0005f4820) Stream added, broadcasting: 1\nI0322 14:07:37.298489 1645 log.go:172] (0xc000a664d0) Reply frame received for 1\nI0322 14:07:37.298532 1645 log.go:172] (0xc000a664d0) (0xc0005f4000) Create stream\nI0322 14:07:37.298544 1645 log.go:172] (0xc000a664d0) (0xc0005f4000) Stream added, broadcasting: 3\nI0322 14:07:37.299609 1645 log.go:172] (0xc000a664d0) Reply frame received for 3\nI0322 14:07:37.299657 1645 log.go:172] (0xc000a664d0) (0xc0006001e0) Create stream\nI0322 14:07:37.299674 1645 log.go:172] (0xc000a664d0) (0xc0006001e0) Stream added, broadcasting: 5\nI0322 14:07:37.300702 1645 log.go:172] (0xc000a664d0) Reply frame received for 5\nI0322 14:07:37.353097 1645 log.go:172] (0xc000a664d0) Data frame received for 3\nI0322 14:07:37.353389 1645 log.go:172] (0xc0005f4000) (3) Data frame handling\nI0322 14:07:37.353414 1645 log.go:172] (0xc0005f4000) (3) Data frame sent\nI0322 14:07:37.353431 1645 log.go:172] (0xc000a664d0) Data frame received for 3\nI0322 14:07:37.353441 1645 log.go:172] (0xc0005f4000) (3) Data frame handling\nI0322 14:07:37.353491 1645 log.go:172] (0xc000a664d0) Data frame received for 5\nI0322 14:07:37.353529 1645 log.go:172] (0xc0006001e0) (5) Data frame handling\nI0322 14:07:37.353574 1645 log.go:172] (0xc0006001e0) (5) 
Data frame sent\nI0322 14:07:37.353596 1645 log.go:172] (0xc000a664d0) Data frame received for 5\nI0322 14:07:37.353619 1645 log.go:172] (0xc0006001e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:07:37.355013 1645 log.go:172] (0xc000a664d0) Data frame received for 1\nI0322 14:07:37.355050 1645 log.go:172] (0xc0005f4820) (1) Data frame handling\nI0322 14:07:37.355085 1645 log.go:172] (0xc0005f4820) (1) Data frame sent\nI0322 14:07:37.355115 1645 log.go:172] (0xc000a664d0) (0xc0005f4820) Stream removed, broadcasting: 1\nI0322 14:07:37.355145 1645 log.go:172] (0xc000a664d0) Go away received\nI0322 14:07:37.355455 1645 log.go:172] (0xc000a664d0) (0xc0005f4820) Stream removed, broadcasting: 1\nI0322 14:07:37.355474 1645 log.go:172] (0xc000a664d0) (0xc0005f4000) Stream removed, broadcasting: 3\nI0322 14:07:37.355483 1645 log.go:172] (0xc000a664d0) (0xc0006001e0) Stream removed, broadcasting: 5\n" Mar 22 14:07:37.359: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:07:37.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:07:37.360: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 22 14:08:07.376: INFO: Deleting all statefulset in ns statefulset-6578 Mar 22 14:08:07.379: INFO: Scaling statefulset ss to 0 Mar 22 14:08:07.386: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:08:07.388: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:08:07.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6578" for this suite. 
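Note: the ordered scale-down exercised above is straightforward to reproduce by hand against any test cluster. The sketch below is illustrative only; the manifest, the app=ss label, and the exec-based readiness probe are assumptions rather than the suite's exact fixtures, but the mv trick for toggling readiness and the reverse-ordinal teardown are the same behaviour this log records.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss          # a headless Service is only needed for DNS; omitted here
  replicas: 3
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          exec:
            command: ["test", "-f", "/usr/share/nginx/html/index.html"]
EOF
kubectl rollout status statefulset/ss
# Break readiness the way the test does, by hiding the probed file:
for i in 0 1 2; do kubectl exec ss-$i -- sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'; done
# With every pod NotReady, OrderedReady pod management holds position and will not
# scale. Restore readiness, then scale down:
for i in 0 1 2; do kubectl exec ss-$i -- sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; done
kubectl scale statefulset ss --replicas=0
kubectl get pods -l app=ss -w   # pods terminate in reverse ordinal order: ss-2, ss-1, ss-0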
Mar 22 14:08:13.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:08:13.494: INFO: namespace statefulset-6578 deletion completed in 6.090868135s • [SLOW TEST:100.699 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:08:13.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 22 14:08:13.580: INFO: Waiting up to 5m0s for pod "pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae" in namespace "emptydir-5904" to be "success or failure" Mar 22 14:08:13.584: INFO: Pod "pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429674ms Mar 22 14:08:15.588: INFO: Pod "pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007762774s Mar 22 14:08:17.592: INFO: Pod "pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011610016s STEP: Saw pod success Mar 22 14:08:17.592: INFO: Pod "pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae" satisfied condition "success or failure" Mar 22 14:08:17.595: INFO: Trying to get logs from node iruya-worker2 pod pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae container test-container: STEP: delete the pod Mar 22 14:08:17.631: INFO: Waiting for pod pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae to disappear Mar 22 14:08:17.662: INFO: Pod pod-fc96e93b-114c-40bf-b8e5-db8a3a7a0bae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:08:17.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5904" for this suite. 
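Note: the EmptyDir mode specs in this run all follow one pattern: mount an emptyDir, create a file, and assert the observed permissions. A hand-rolled sketch (pod name, image, and the stat-based check are assumptions; the suite uses its own mount-test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}      # default medium; medium: Memory would select tmpfs instead
EOF
kubectl logs pod/emptydir-mode-demo   # once Succeeded: directory mode (777 on default medium), then 777 for the file

The (non-root,0777,default) variant later in this run differs only in running the pod with spec.securityContext.runAsUser set to a non-zero UID; the "volume on default medium should have the correct mode" spec checks only the directory mode itself.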
Mar 22 14:08:23.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:08:23.749: INFO: namespace emptydir-5904 deletion completed in 6.0833483s • [SLOW TEST:10.254 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:08:23.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-2cf6146c-e30c-45a6-a44e-f47baa6eb370 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:08:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3621" for this suite. 
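Note: the binary-data round trip above can be reproduced with any ConfigMap carrying a binaryData key. A sketch; names and the probe command are illustrative, and it assumes a kubectl recent enough to place non-UTF-8 --from-file content under binaryData rather than data:

printf '\x00\x01\x02\x03' > /tmp/blob.bin
kubectl create configmap cm-bin-demo --from-literal=data.txt=hello --from-file=data.bin=/tmp/blob.bin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-bin-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/data.txt && hexdump -C /etc/cm/data.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-bin-demo
EOF
kubectl logs pod/cm-bin-demo   # expect "hello" followed by a hex dump of 00 01 02 03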
Mar 22 14:08:50.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:08:50.288: INFO: namespace configmap-3621 deletion completed in 22.092584839s • [SLOW TEST:26.538 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:08:50.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 22 14:08:50.371: INFO: Waiting up to 5m0s for pod "pod-f15fd979-c925-4da4-bb76-94429e9b4eb5" in namespace "emptydir-6920" to be "success or failure" Mar 22 14:08:50.375: INFO: Pod "pod-f15fd979-c925-4da4-bb76-94429e9b4eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616378ms Mar 22 14:08:52.379: INFO: Pod "pod-f15fd979-c925-4da4-bb76-94429e9b4eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007390491s Mar 22 14:08:54.383: INFO: Pod "pod-f15fd979-c925-4da4-bb76-94429e9b4eb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011520067s STEP: Saw pod success Mar 22 14:08:54.383: INFO: Pod "pod-f15fd979-c925-4da4-bb76-94429e9b4eb5" satisfied condition "success or failure" Mar 22 14:08:54.386: INFO: Trying to get logs from node iruya-worker pod pod-f15fd979-c925-4da4-bb76-94429e9b4eb5 container test-container: STEP: delete the pod Mar 22 14:08:54.407: INFO: Waiting for pod pod-f15fd979-c925-4da4-bb76-94429e9b4eb5 to disappear Mar 22 14:08:54.411: INFO: Pod pod-f15fd979-c925-4da4-bb76-94429e9b4eb5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:08:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6920" for this suite. 
Mar 22 14:09:00.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:09:00.519: INFO: namespace emptydir-6920 deletion completed in 6.104986129s • [SLOW TEST:10.230 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:09:00.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 22 14:09:00.607: INFO: Waiting up to 5m0s for pod "pod-bb4dd568-370d-4e30-952f-54827c3c9b15" in namespace "emptydir-1431" to be "success or failure" Mar 22 14:09:00.616: INFO: Pod "pod-bb4dd568-370d-4e30-952f-54827c3c9b15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017534ms Mar 22 14:09:02.620: INFO: Pod "pod-bb4dd568-370d-4e30-952f-54827c3c9b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012202379s Mar 22 14:09:04.624: INFO: Pod "pod-bb4dd568-370d-4e30-952f-54827c3c9b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016806969s STEP: Saw pod success Mar 22 14:09:04.624: INFO: Pod "pod-bb4dd568-370d-4e30-952f-54827c3c9b15" satisfied condition "success or failure" Mar 22 14:09:04.628: INFO: Trying to get logs from node iruya-worker pod pod-bb4dd568-370d-4e30-952f-54827c3c9b15 container test-container: STEP: delete the pod Mar 22 14:09:04.694: INFO: Waiting for pod pod-bb4dd568-370d-4e30-952f-54827c3c9b15 to disappear Mar 22 14:09:04.699: INFO: Pod pod-bb4dd568-370d-4e30-952f-54827c3c9b15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:09:04.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1431" for this suite. 
Mar 22 14:09:10.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:09:10.790: INFO: namespace emptydir-1431 deletion completed in 6.088559486s • [SLOW TEST:10.270 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:09:10.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 14:09:16.900: INFO: DNS probes using dns-9677/dns-test-71405fc4-f4eb-430b-be4d-15624c3df8f8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:09:17.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9677" for this suite. 
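Note: the wheezy/jessie probe pods above loop dig and write OK marker files; the same cluster-DNS check can be made interactively. busybox:1.28 is chosen deliberately, since later busybox builds ship a less reliable nslookup; everything else is illustrative:

kubectl run dns-probe --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-probe
kubectl exec dns-probe -- nslookup kubernetes.default.svc.cluster.local   # should resolve to the kubernetes Service ClusterIP
kubectl delete pod dns-probe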
Mar 22 14:09:23.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:09:23.110: INFO: namespace dns-9677 deletion completed in 6.100484596s • [SLOW TEST:12.319 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:09:23.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:09:23.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-964" for this suite. 
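Note: what this spec pins down is that a pod whose container always fails, and is therefore crash-looping, can still be deleted cleanly. A minimal sketch (pod name and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]   # exits immediately, so the pod restarts in a loop
EOF
kubectl delete pod bin-false   # must succeed despite the restart loop
kubectl get pod bin-false      # eventually reports NotFound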
Mar 22 14:09:29.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:09:29.374: INFO: namespace kubelet-test-964 deletion completed in 6.090142631s • [SLOW TEST:6.264 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:09:29.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-aa8c5938-66c3-42ce-b967-abeba9873f03 in namespace container-probe-8474 Mar 22 14:09:33.459: INFO: Started pod liveness-aa8c5938-66c3-42ce-b967-abeba9873f03 in namespace container-probe-8474 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 14:09:33.462: INFO: Initial restart count of pod liveness-aa8c5938-66c3-42ce-b967-abeba9873f03 is 0 Mar 22 14:09:57.525: INFO: Restart count of pod container-probe-8474/liveness-aa8c5938-66c3-42ce-b967-abeba9873f03 is now 1 (24.063018223s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:09:57.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8474" for this suite. 
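Note: the restart observed above (count 0 -> 1 after roughly 24s) is the liveness machinery at work. A minimal sketch using the liveness example image from the Kubernetes docs, which serves /healthz on 8080 and deliberately starts failing it after about 10s; any image with that behaviour works:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
kubectl get pod liveness-http-demo -w   # RESTARTS climbs once /healthz starts returning 500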
Mar 22 14:10:03.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:10:03.629: INFO: namespace container-probe-8474 deletion completed in 6.087217833s • [SLOW TEST:34.254 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:10:03.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:10:03.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06" in namespace "downward-api-2628" to be "success or failure" Mar 22 14:10:03.685: INFO: Pod "downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06": Phase="Pending", Reason="", readiness=false. Elapsed: 16.040613ms Mar 22 14:10:05.688: INFO: Pod "downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019574313s Mar 22 14:10:07.692: INFO: Pod "downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023302553s STEP: Saw pod success Mar 22 14:10:07.692: INFO: Pod "downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06" satisfied condition "success or failure" Mar 22 14:10:07.695: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06 container client-container: STEP: delete the pod Mar 22 14:10:07.730: INFO: Waiting for pod downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06 to disappear Mar 22 14:10:07.745: INFO: Pod downwardapi-volume-ee2ee5a2-8dcb-4487-9a4a-d09157e19f06 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:10:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2628" for this suite. 
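Note: the pod under test exposes limits.memory through a downwardAPI volume without actually setting a memory limit, so the kubelet substitutes node allocatable memory. A sketch (names and the 1Mi divisor are illustrative); the later Downward API spec in this run is the same pattern with resource: requests.memory:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
EOF
kubectl logs pod/downward-mem-demo   # with no limit set, prints node allocatable memory (in Mi)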
Mar 22 14:10:13.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:10:13.843: INFO: namespace downward-api-2628 deletion completed in 6.094475866s • [SLOW TEST:10.213 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:10:13.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 22 14:10:13.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 22 14:10:14.079: INFO: stderr: "" Mar 22 14:10:14.079: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:10:14.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5225" for this suite. 
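Note: the assertion here reduces to a single kubectl invocation; grep -x insists on an exact "v1" line in the group/version list:

kubectl api-versions | grep -x v1 && echo "core v1 is served"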
Mar 22 14:10:20.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:10:20.201: INFO: namespace kubectl-5225 deletion completed in 6.117708048s • [SLOW TEST:6.358 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:10:20.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:10:20.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0" in namespace "downward-api-6024" to be "success or failure" Mar 22 14:10:20.282: INFO: Pod "downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.814436ms Mar 22 14:10:22.287: INFO: Pod "downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009451006s Mar 22 14:10:24.299: INFO: Pod "downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021427985s STEP: Saw pod success Mar 22 14:10:24.299: INFO: Pod "downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0" satisfied condition "success or failure" Mar 22 14:10:24.302: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0 container client-container: STEP: delete the pod Mar 22 14:10:24.342: INFO: Waiting for pod downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0 to disappear Mar 22 14:10:24.354: INFO: Pod downwardapi-volume-7b1f7d8a-8d52-4b37-9d08-7eb8d16322a0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:10:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6024" for this suite. 
Mar 22 14:10:30.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:10:30.458: INFO: namespace downward-api-6024 deletion completed in 6.101595547s • [SLOW TEST:10.257 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:10:30.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 22 14:10:38.565: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:38.572: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:40.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:40.576: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:42.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:42.576: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:44.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:44.577: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:46.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:46.576: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:48.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:48.576: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:50.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:50.576: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 14:10:52.572: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 14:10:52.599: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:10:52.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5383" for this suite. 
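Note: the hook test points a postStart httpGet at a separate handler pod and then watches the handler for the request. A sketch of the same wiring; the handler image and paths are assumptions (the official nginx images forward their access log to stdout, which makes the hook call visible):

kubectl run hook-handler --image=nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/hook-handler
HANDLER_IP=$(kubectl get pod hook-handler -o jsonpath='{.status.podIP}')
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          host: ${HANDLER_IP}
          path: /
          port: 80
EOF
kubectl logs hook-handler   # the GET issued by the postStart hook shows up in the access log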
Mar 22 14:11:14.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:11:14.688: INFO: namespace container-lifecycle-hook-5383 deletion completed in 22.085727638s • [SLOW TEST:44.229 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:11:14.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 22 14:11:14.735: INFO: PodSpec: initContainers in spec.initContainers Mar 22 14:12:00.427: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4bf39aa9-24ce-4eb7-accb-d2c358c4c9c2", GenerateName:"", Namespace:"init-container-6659", SelfLink:"/api/v1/namespaces/init-container-6659/pods/pod-init-4bf39aa9-24ce-4eb7-accb-d2c358c4c9c2", UID:"1717e9c4-971c-4d40-ad75-3b6f2743d33a", ResourceVersion:"1249713", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720483074, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"735940615"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8p28d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00316a180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8p28d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8p28d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8p28d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00296a278), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027a00c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00296a300)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00296a320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00296a328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00296a32c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483074, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483074, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483074, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483074, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.139", StartTime:(*v1.Time)(0xc002fbe120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002fbe160), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fbc1c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3d47801cd2ff27ac52d196180576cb87ff9aa0e2bf7db322c857d2ca60826de6"}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002fbe180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002fbe140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:12:00.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6659" for this suite. Mar 22 14:12:22.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:12:22.547: INFO: namespace init-container-6659 deletion completed in 22.110780482s • [SLOW TEST:67.859 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:12:22.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:12:22.622: INFO: Create a RollingUpdate DaemonSet Mar 22 14:12:22.641: INFO: Check that daemon pods launch on every node of the cluster Mar 22 14:12:22.645: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:22.682: INFO: Number of nodes with available pods: 0 Mar 22 14:12:22.682: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:12:23.687: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:23.690: INFO: Number of nodes with available pods: 0 Mar 22 14:12:23.690: 
INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:12:24.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:24.691: INFO: Number of nodes with available pods: 0 Mar 22 14:12:24.691: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:12:25.805: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:25.825: INFO: Number of nodes with available pods: 0 Mar 22 14:12:25.825: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:12:26.687: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:26.691: INFO: Number of nodes with available pods: 0 Mar 22 14:12:26.691: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:12:27.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:27.692: INFO: Number of nodes with available pods: 2 Mar 22 14:12:27.692: INFO: Number of running nodes: 2, number of available pods: 2 Mar 22 14:12:27.692: INFO: Update the DaemonSet to trigger a rollout Mar 22 14:12:27.700: INFO: Updating DaemonSet daemon-set Mar 22 14:12:42.739: INFO: Roll back the DaemonSet before rollout is complete Mar 22 14:12:42.746: INFO: Updating DaemonSet daemon-set Mar 22 14:12:42.746: INFO: Make sure DaemonSet rollback is complete Mar 22 14:12:42.753: INFO: Wrong image for pod: daemon-set-wcshv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 22 14:12:42.753: INFO: Pod daemon-set-wcshv is not available Mar 22 14:12:42.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:43.778: INFO: Wrong image for pod: daemon-set-wcshv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 22 14:12:43.778: INFO: Pod daemon-set-wcshv is not available Mar 22 14:12:43.782: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:44.778: INFO: Wrong image for pod: daemon-set-wcshv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Mar 22 14:12:44.778: INFO: Pod daemon-set-wcshv is not available Mar 22 14:12:44.782: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 14:12:45.778: INFO: Pod daemon-set-rl448 is not available Mar 22 14:12:45.783: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9342, will wait for the garbage collector to delete the pods Mar 22 14:12:45.848: INFO: Deleting DaemonSet.extensions daemon-set took: 6.459021ms Mar 22 14:12:46.148: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239927ms Mar 22 14:12:51.952: INFO: Number of nodes with available pods: 0 Mar 22 14:12:51.952: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 14:12:51.954: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9342/daemonsets","resourceVersion":"1249904"},"items":null} Mar 22 14:12:51.957: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9342/pods","resourceVersion":"1249904"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:12:51.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9342" for this suite. 
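The rollback above works by updating the DaemonSet to an unpullable image (foo:non-existent) and then restoring the previous pod template before the rollout finishes. A minimal client-go sketch of that update-then-revert flow, assuming a v1.15-era client (pre-context method signatures) and the kubeconfig path, namespace, and object names taken from the log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ds, err := clientset.AppsV1().DaemonSets("daemonsets-9342").Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	oldImage := ds.Spec.Template.Spec.Containers[0].Image // e.g. docker.io/library/nginx:1.14-alpine

	// Trigger a rollout with an image that can never be pulled.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = clientset.AppsV1().DaemonSets("daemonsets-9342").Update(ds)
	if err != nil {
		panic(err)
	}

	// Roll back before the rollout completes by restoring the old template.
	ds.Spec.Template.Spec.Containers[0].Image = oldImage
	if _, err = clientset.AppsV1().DaemonSets("daemonsets-9342").Update(ds); err != nil {
		panic(err)
	}
	fmt.Println("rollback submitted")
}

Pods that the bad template never replaced still match the restored template, so the controller leaves them untouched; that is the "without unnecessary restarts" property the spec asserts.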
Mar 22 14:12:57.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:12:58.071: INFO: namespace daemonsets-9342 deletion completed in 6.100734895s • [SLOW TEST:35.524 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:12:58.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6487, will wait for the garbage collector to delete the pods Mar 22 14:13:02.261: INFO: Deleting Job.batch foo took: 6.316283ms Mar 22 14:13:02.562: INFO: Terminating Job.batch foo pods took: 300.243598ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:13:42.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6487" for this suite. 
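The Job deletion above hands pod cleanup to the garbage collector, which is why the log times "Deleting Job.batch foo" and "Terminating Job.batch foo pods" separately. A sketch of one way to get that behavior with a v1.15-era client; background propagation is an assumption here, not necessarily the exact policy the framework uses:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// With background propagation the Job object is removed first and
	// the garbage collector then deletes its pods, so a caller that
	// needs the pods gone must wait on them separately.
	policy := metav1.DeletePropagationBackground
	err = clientset.BatchV1().Jobs("job-6487").Delete("foo", &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
	if err != nil {
		panic(err)
	}
}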
Mar 22 14:13:48.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:13:48.270: INFO: namespace job-6487 deletion completed in 6.103968084s • [SLOW TEST:50.199 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:13:48.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-4f503f39-04a0-44d8-ad39-6275cdbf2be1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:13:48.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9345" for this suite. Mar 22 14:13:54.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:13:54.423: INFO: namespace secrets-9345 deletion completed in 6.087072251s • [SLOW TEST:6.151 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:13:54.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 22 14:13:59.564: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 
14:14:00.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5773" for this suite. Mar 22 14:14:22.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:14:22.810: INFO: namespace replicaset-5773 deletion completed in 22.198277623s • [SLOW TEST:28.387 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:14:22.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 14:14:22.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6515' Mar 22 14:14:22.939: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 22 14:14:22.939: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 22 14:14:22.959: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-kjc7q] Mar 22 14:14:22.959: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-kjc7q" in namespace "kubectl-6515" to be "running and ready" Mar 22 14:14:22.964: INFO: Pod "e2e-test-nginx-rc-kjc7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985444ms Mar 22 14:14:24.968: INFO: Pod "e2e-test-nginx-rc-kjc7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008548055s Mar 22 14:14:26.972: INFO: Pod "e2e-test-nginx-rc-kjc7q": Phase="Running", Reason="", readiness=true. Elapsed: 4.012381675s Mar 22 14:14:26.972: INFO: Pod "e2e-test-nginx-rc-kjc7q" satisfied condition "running and ready" Mar 22 14:14:26.972: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-kjc7q] Mar 22 14:14:26.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6515' Mar 22 14:14:27.087: INFO: stderr: "" Mar 22 14:14:27.087: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 22 14:14:27.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6515' Mar 22 14:14:27.205: INFO: stderr: "" Mar 22 14:14:27.205: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:14:27.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6515" for this suite. Mar 22 14:14:33.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:14:33.289: INFO: namespace kubectl-6515 deletion completed in 6.079046697s • [SLOW TEST:10.479 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:14:33.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 22 14:14:33.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-55,SelfLink:/api/v1/namespaces/watch-55/configmaps/e2e-watch-test-resource-version,UID:4c5f6eb5-ffae-4744-aed7-1d5f3d36e054,ResourceVersion:1250269,Generation:0,CreationTimestamp:2020-03-22 14:14:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 22 14:14:33.403: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-55,SelfLink:/api/v1/namespaces/watch-55/configmaps/e2e-watch-test-resource-version,UID:4c5f6eb5-ffae-4744-aed7-1d5f3d36e054,ResourceVersion:1250270,Generation:0,CreationTimestamp:2020-03-22 14:14:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:14:33.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-55" for this suite. Mar 22 14:14:39.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:14:39.510: INFO: namespace watch-55 deletion completed in 6.084493923s • [SLOW TEST:6.221 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:14:39.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:14:39.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 22 14:14:39.728: INFO: stderr: "" Mar 22 14:14:39.728: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:14:39.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5854" for this suite. 
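The [sig-api-machinery] Watchers spec above opens its watch at the ResourceVersion returned by the first update, so the server replays only the changes after that point: the second MODIFIED event and the DELETED event, but not the first modification. A minimal sketch under v1.15-era client-go signatures (Watch takes ListOptions without a context); the resource version shown is hypothetical:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Start watching from a specific resource version; everything that
	// happened to matching configmaps after that version is replayed.
	w, err := clientset.CoreV1().ConfigMaps("watch-55").Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: "1250268", // hypothetical: the version returned by the first update
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}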
Mar 22 14:14:45.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:14:45.826: INFO: namespace kubectl-5854 deletion completed in 6.093022879s • [SLOW TEST:6.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:14:45.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 22 14:14:45.888: INFO: Waiting up to 5m0s for pod "downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a" in namespace "downward-api-3929" to be "success or failure" Mar 22 14:14:45.892: INFO: Pod "downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605396ms Mar 22 14:14:47.913: INFO: Pod "downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025415198s Mar 22 14:14:49.917: INFO: Pod "downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029481482s STEP: Saw pod success Mar 22 14:14:49.917: INFO: Pod "downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a" satisfied condition "success or failure" Mar 22 14:14:49.920: INFO: Trying to get logs from node iruya-worker pod downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a container dapi-container: STEP: delete the pod Mar 22 14:14:49.956: INFO: Waiting for pod downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a to disappear Mar 22 14:14:49.971: INFO: Pod downward-api-0e398dc2-e5b6-4c83-b2b3-fa325fbfe85a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:14:49.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3929" for this suite. 
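The Downward API spec above runs a container that declares no resource limits and checks that limits.cpu and limits.memory still resolve, falling back to the node's allocatable capacity. A sketch of the pod shape involved; the image and command are assumptions, not the suite's exact manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						// No limits are set on this container, so this
						// resolves to the node's allocatable CPU.
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						// Likewise, falls back to allocatable memory.
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}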
Mar 22 14:14:55.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:14:56.067: INFO: namespace downward-api-3929 deletion completed in 6.093321719s • [SLOW TEST:10.241 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:14:56.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-db00db33-9d90-41c2-9822-f7647131aa97 STEP: Creating a pod to test consume secrets Mar 22 14:14:56.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e" in namespace "projected-8608" to be "success or failure" Mar 22 14:14:56.147: INFO: Pod "pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.648409ms Mar 22 14:14:58.151: INFO: Pod "pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018761708s Mar 22 14:15:00.156: INFO: Pod "pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023352374s STEP: Saw pod success Mar 22 14:15:00.156: INFO: Pod "pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e" satisfied condition "success or failure" Mar 22 14:15:00.159: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e container projected-secret-volume-test: STEP: delete the pod Mar 22 14:15:00.192: INFO: Waiting for pod pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e to disappear Mar 22 14:15:00.204: INFO: Pod pod-projected-secrets-70547d61-9376-437c-ab88-330d31378d9e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:15:00.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8608" for this suite. 
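The projected-secret spec above mounts a single secret key under a remapped path with an explicit per-item file mode (the "Item Mode" in the test name). A sketch of the volume definition it implies; the secret name and key are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode checked by the spec

	// Project one secret key under a remapped path with that mode.
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map", // hypothetical name
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // hypothetical key
							Path: "new-path-data-1", // remapped mount path
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}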
Mar 22 14:15:06.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:15:06.327: INFO: namespace projected-8608 deletion completed in 6.118560137s • [SLOW TEST:10.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:15:06.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:16:06.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8751" for this suite. 
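The probe spec above waits a full minute to confirm that a failing readiness probe keeps the pod unready without ever restarting it: readiness only gates service endpoints, whereas a failing liveness probe would restart the container. A sketch of such a container; the image and probe command are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "test-webserver",
		Image: "k8s.gcr.io/pause:3.1", // assumed image choice
		ReadinessProbe: &corev1.Probe{
			// v1.15-era field name; newer releases call this ProbeHandler.
			Handler: corev1.Handler{
				// Always fails, so the container stays Ready=false with
				// RestartCount=0.
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			InitialDelaySeconds: 30,
			PeriodSeconds:       5,
		},
	}
	fmt.Println(container.Name)
}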
Mar 22 14:16:28.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:16:28.494: INFO: namespace container-probe-8751 deletion completed in 22.099746393s • [SLOW TEST:82.167 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:16:28.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 22 14:16:28.549: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:16:35.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-891" for this suite. 
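The init-container spec above exercises the ordering guarantee: init containers run one at a time, in order, and each must exit successfully before the app container starts; on a RestartPolicy: Never pod, a failing init container would leave the pod failed rather than retrying. A sketch of a pod in that shape, reusing the container names and images that appear in this log's earlier pod dump:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// init1 and init2 run sequentially and must both succeed
			// before run1 is started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Println(pod.Name)
}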
Mar 22 14:16:41.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:16:41.378: INFO: namespace init-container-891 deletion completed in 6.114756808s • [SLOW TEST:12.883 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:16:41.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:16:41.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9" in namespace "projected-3047" to be "success or failure" Mar 22 14:16:41.469: INFO: Pod "downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.885408ms Mar 22 14:16:43.472: INFO: Pod "downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017631244s Mar 22 14:16:45.476: INFO: Pod "downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021298711s STEP: Saw pod success Mar 22 14:16:45.476: INFO: Pod "downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9" satisfied condition "success or failure" Mar 22 14:16:45.479: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9 container client-container: STEP: delete the pod Mar 22 14:16:45.507: INFO: Waiting for pod downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9 to disappear Mar 22 14:16:45.517: INFO: Pod downwardapi-volume-02a2b63f-d966-4c83-90f0-2ec498d3f9f9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:16:45.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3047" for this suite. 
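The projected downwardAPI spec above exposes limits.memory as a file inside the pod; with no limit declared on the container, the kubelet writes the node's allocatable memory instead, which is what the test reads back from the pod's output. A sketch of the projected volume it implies, reusing the client-container name from the log; the file path is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// With no limit set on client-container, this
							// file contains the node's allocatable memory.
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}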
Mar 22 14:16:51.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:16:51.647: INFO: namespace projected-3047 deletion completed in 6.127384865s • [SLOW TEST:10.268 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:16:51.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 22 14:16:59.770: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 14:16:59.775: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 14:17:01.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 14:17:01.779: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 14:17:03.776: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 14:17:03.779: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:17:03.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4777" for this suite. 
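The lifecycle-hook spec above deletes a pod carrying a preStop HTTPGet hook and then checks that the separately created handler pod observed the request before the hooked pod disappeared: the kubelet fires the hook first and only then proceeds with termination. A sketch of the hooked container using the v1.15-era Handler type (newer releases use LifecycleHandler); the image, path, and handler address are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "k8s.gcr.io/pause:3.1", // assumed image choice
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				// On pod deletion the kubelet issues this GET against
				// the handler pod before terminating the container.
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop",
					Host: "10.244.2.150", // hypothetical: the handler pod's IP
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Println(container.Name)
}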
Mar 22 14:17:25.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:17:25.879: INFO: namespace container-lifecycle-hook-4777 deletion completed in 22.089696999s • [SLOW TEST:34.232 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:17:25.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:17:25.987: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 22 14:17:30.992: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 22 14:17:30.992: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 22 14:17:32.996: INFO: Creating deployment "test-rollover-deployment" Mar 22 14:17:33.004: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 22 14:17:35.030: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 22 14:17:35.036: INFO: Ensure that both replica sets have 1 created replica Mar 22 14:17:35.041: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 22 14:17:35.047: INFO: Updating deployment test-rollover-deployment Mar 22 14:17:35.047: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 22 14:17:37.055: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 22 14:17:37.062: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 22 14:17:37.067: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:37.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483455, loc:(*time.Location)(0x7ea78c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:39.076: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:39.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483458, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:41.076: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:41.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483458, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:43.074: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:43.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483458, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:45.075: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:45.075: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483458, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:47.076: INFO: all replica sets need to contain the pod-template-hash label Mar 22 14:17:47.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483458, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483453, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:17:49.075: INFO: Mar 22 14:17:49.075: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 22 14:17:49.084: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2076,SelfLink:/apis/apps/v1/namespaces/deployment-2076/deployments/test-rollover-deployment,UID:d4c82ea1-0ced-4213-a19d-1e66ecd8af4d,ResourceVersion:1250928,Generation:2,CreationTimestamp:2020-03-22 14:17:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-22 14:17:33 +0000 UTC 2020-03-22 14:17:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-22 14:17:48 +0000 UTC 2020-03-22 14:17:33 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 22 14:17:49.086: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2076,SelfLink:/apis/apps/v1/namespaces/deployment-2076/replicasets/test-rollover-deployment-854595fc44,UID:645a3e48-c7e1-43eb-8640-fe762ad912f5,ResourceVersion:1250916,Generation:2,CreationTimestamp:2020-03-22 14:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d4c82ea1-0ced-4213-a19d-1e66ecd8af4d 0xc002cf5117 0xc002cf5118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 22 14:17:49.086: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 22 14:17:49.086: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2076,SelfLink:/apis/apps/v1/namespaces/deployment-2076/replicasets/test-rollover-controller,UID:82cb48e9-94d1-4738-8dce-fb104295aac8,ResourceVersion:1250927,Generation:2,CreationTimestamp:2020-03-22 14:17:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d4c82ea1-0ced-4213-a19d-1e66ecd8af4d 0xc002cf502f 0xc002cf5040}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 14:17:49.086: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2076,SelfLink:/apis/apps/v1/namespaces/deployment-2076/replicasets/test-rollover-deployment-9b8b997cf,UID:68a106cb-c0c4-4265-a865-4aa31c0c8574,ResourceVersion:1250877,Generation:2,CreationTimestamp:2020-03-22 14:17:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d4c82ea1-0ced-4213-a19d-1e66ecd8af4d 0xc002cf51e0 0xc002cf51e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 14:17:49.089: INFO: Pod "test-rollover-deployment-854595fc44-srdhn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-srdhn,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2076,SelfLink:/api/v1/namespaces/deployment-2076/pods/test-rollover-deployment-854595fc44-srdhn,UID:08c694a9-bf4e-4aca-aa15-a97c4341de7b,ResourceVersion:1250894,Generation:0,CreationTimestamp:2020-03-22 14:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 645a3e48-c7e1-43eb-8640-fe762ad912f5 0xc002cf5f27 0xc002cf5f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pj9dw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pj9dw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pj9dw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cf5fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cf5fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:17:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:17:38 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-03-22 14:17:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:17:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.46,StartTime:2020-03-22 14:17:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-22 14:17:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ec02a007146df3bf77b1eafc37af325f995f164b34204ebb9165fb5a5723a531}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:17:49.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2076" for this suite. Mar 22 14:17:55.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:17:55.214: INFO: namespace deployment-2076 deletion completed in 6.122234986s • [SLOW TEST:29.335 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:17:55.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6835 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 14:17:55.261: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 22 14:18:21.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostName&protocol=udp&host=10.244.1.47&port=8081&tries=1'] Namespace:pod-network-test-6835 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 14:18:21.423: INFO: >>> kubeConfig: /root/.kube/config I0322 14:18:21.459611 6 log.go:172] (0xc000e9e420) (0xc0027cc500) Create stream I0322 14:18:21.459642 6 log.go:172] (0xc000e9e420) (0xc0027cc500) Stream added, broadcasting: 1 I0322 14:18:21.462138 6 log.go:172] (0xc000e9e420) Reply frame received for 1 I0322 14:18:21.462180 6 log.go:172] (0xc000e9e420) (0xc003304000) Create stream I0322 14:18:21.462194 6 log.go:172] (0xc000e9e420) (0xc003304000) Stream added, broadcasting: 3 I0322 14:18:21.463346 6 log.go:172] (0xc000e9e420) Reply frame received for 3 I0322 14:18:21.463383 6 
log.go:172] (0xc000e9e420) (0xc0027cc5a0) Create stream I0322 14:18:21.463396 6 log.go:172] (0xc000e9e420) (0xc0027cc5a0) Stream added, broadcasting: 5 I0322 14:18:21.464220 6 log.go:172] (0xc000e9e420) Reply frame received for 5 I0322 14:18:21.559629 6 log.go:172] (0xc000e9e420) Data frame received for 3 I0322 14:18:21.559672 6 log.go:172] (0xc003304000) (3) Data frame handling I0322 14:18:21.559700 6 log.go:172] (0xc003304000) (3) Data frame sent I0322 14:18:21.560532 6 log.go:172] (0xc000e9e420) Data frame received for 5 I0322 14:18:21.560563 6 log.go:172] (0xc0027cc5a0) (5) Data frame handling I0322 14:18:21.560810 6 log.go:172] (0xc000e9e420) Data frame received for 3 I0322 14:18:21.560841 6 log.go:172] (0xc003304000) (3) Data frame handling I0322 14:18:21.562836 6 log.go:172] (0xc000e9e420) Data frame received for 1 I0322 14:18:21.562873 6 log.go:172] (0xc0027cc500) (1) Data frame handling I0322 14:18:21.562890 6 log.go:172] (0xc0027cc500) (1) Data frame sent I0322 14:18:21.562925 6 log.go:172] (0xc000e9e420) (0xc0027cc500) Stream removed, broadcasting: 1 I0322 14:18:21.562983 6 log.go:172] (0xc000e9e420) Go away received I0322 14:18:21.563017 6 log.go:172] (0xc000e9e420) (0xc0027cc500) Stream removed, broadcasting: 1 I0322 14:18:21.563032 6 log.go:172] (0xc000e9e420) (0xc003304000) Stream removed, broadcasting: 3 I0322 14:18:21.563047 6 log.go:172] (0xc000e9e420) (0xc0027cc5a0) Stream removed, broadcasting: 5 Mar 22 14:18:21.563: INFO: Waiting for endpoints: map[] Mar 22 14:18:21.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.150:8080/dial?request=hostName&protocol=udp&host=10.244.2.149&port=8081&tries=1'] Namespace:pod-network-test-6835 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 22 14:18:21.567: INFO: >>> kubeConfig: /root/.kube/config I0322 14:18:21.601324 6 log.go:172] (0xc0018dab00) (0xc0031174a0) Create stream I0322 14:18:21.601405 6 log.go:172] (0xc0018dab00) (0xc0031174a0) Stream added, broadcasting: 1 I0322 14:18:21.605734 6 log.go:172] (0xc0018dab00) Reply frame received for 1 I0322 14:18:21.605790 6 log.go:172] (0xc0018dab00) (0xc001d64280) Create stream I0322 14:18:21.605809 6 log.go:172] (0xc0018dab00) (0xc001d64280) Stream added, broadcasting: 3 I0322 14:18:21.607898 6 log.go:172] (0xc0018dab00) Reply frame received for 3 I0322 14:18:21.607930 6 log.go:172] (0xc0018dab00) (0xc0027cc640) Create stream I0322 14:18:21.607942 6 log.go:172] (0xc0018dab00) (0xc0027cc640) Stream added, broadcasting: 5 I0322 14:18:21.609457 6 log.go:172] (0xc0018dab00) Reply frame received for 5 I0322 14:18:21.661813 6 log.go:172] (0xc0018dab00) Data frame received for 3 I0322 14:18:21.661842 6 log.go:172] (0xc001d64280) (3) Data frame handling I0322 14:18:21.661861 6 log.go:172] (0xc001d64280) (3) Data frame sent I0322 14:18:21.662740 6 log.go:172] (0xc0018dab00) Data frame received for 5 I0322 14:18:21.662775 6 log.go:172] (0xc0018dab00) Data frame received for 3 I0322 14:18:21.662802 6 log.go:172] (0xc001d64280) (3) Data frame handling I0322 14:18:21.662855 6 log.go:172] (0xc0027cc640) (5) Data frame handling I0322 14:18:21.664378 6 log.go:172] (0xc0018dab00) Data frame received for 1 I0322 14:18:21.664395 6 log.go:172] (0xc0031174a0) (1) Data frame handling I0322 14:18:21.664403 6 log.go:172] (0xc0031174a0) (1) Data frame sent I0322 14:18:21.664415 6 log.go:172] (0xc0018dab00) (0xc0031174a0) Stream removed, broadcasting: 1 I0322 14:18:21.664541 6 log.go:172] (0xc0018dab00) 
Go away received I0322 14:18:21.664670 6 log.go:172] (0xc0018dab00) (0xc0031174a0) Stream removed, broadcasting: 1 I0322 14:18:21.664703 6 log.go:172] (0xc0018dab00) (0xc001d64280) Stream removed, broadcasting: 3 I0322 14:18:21.664725 6 log.go:172] (0xc0018dab00) (0xc0027cc640) Stream removed, broadcasting: 5 Mar 22 14:18:21.664: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:18:21.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6835" for this suite. Mar 22 14:18:43.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:18:43.767: INFO: namespace pod-network-test-6835 deletion completed in 22.098588384s • [SLOW TEST:48.553 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:18:43.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-027c30ab-ce8f-4a07-98d8-c82489d170a4 STEP: Creating a pod to test consume configMaps Mar 22 14:18:43.883: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa" in namespace "projected-321" to be "success or failure" Mar 22 14:18:43.891: INFO: Pod "pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.994258ms Mar 22 14:18:45.895: INFO: Pod "pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012442336s Mar 22 14:18:47.899: INFO: Pod "pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016443937s STEP: Saw pod success Mar 22 14:18:47.899: INFO: Pod "pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa" satisfied condition "success or failure" Mar 22 14:18:47.902: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa container projected-configmap-volume-test: STEP: delete the pod Mar 22 14:18:47.923: INFO: Waiting for pod pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa to disappear Mar 22 14:18:47.955: INFO: Pod pod-projected-configmaps-f00fb37f-515a-4a5b-a6bb-c6d371316ffa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:18:47.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-321" for this suite. Mar 22 14:18:53.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:18:54.057: INFO: namespace projected-321 deletion completed in 6.098023411s • [SLOW TEST:10.290 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:18:54.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 22 14:18:54.138: INFO: Waiting up to 5m0s for pod "downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b" in namespace "downward-api-4354" to be "success or failure" Mar 22 14:18:54.141: INFO: Pod "downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.584457ms Mar 22 14:18:56.145: INFO: Pod "downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006298001s Mar 22 14:18:58.152: INFO: Pod "downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013629227s STEP: Saw pod success Mar 22 14:18:58.152: INFO: Pod "downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b" satisfied condition "success or failure" Mar 22 14:18:58.155: INFO: Trying to get logs from node iruya-worker2 pod downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b container dapi-container: STEP: delete the pod Mar 22 14:18:58.173: INFO: Waiting for pod downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b to disappear Mar 22 14:18:58.178: INFO: Pod downward-api-1bf5c85a-33aa-4e7d-9199-de3c36b8716b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:18:58.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4354" for this suite. Mar 22 14:19:04.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:19:04.273: INFO: namespace downward-api-4354 deletion completed in 6.092230458s • [SLOW TEST:10.216 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:19:04.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:19:04.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef" in namespace "projected-8238" to be "success or failure" Mar 22 14:19:04.354: INFO: Pod "downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008892ms Mar 22 14:19:06.359: INFO: Pod "downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008670232s Mar 22 14:19:08.363: INFO: Pod "downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013088632s STEP: Saw pod success Mar 22 14:19:08.364: INFO: Pod "downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef" satisfied condition "success or failure" Mar 22 14:19:08.366: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef container client-container: STEP: delete the pod Mar 22 14:19:08.511: INFO: Waiting for pod downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef to disappear Mar 22 14:19:08.550: INFO: Pod downwardapi-volume-627f24d1-fe18-4a24-b854-a4d67f4973ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:19:08.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8238" for this suite. Mar 22 14:19:14.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:19:14.662: INFO: namespace projected-8238 deletion completed in 6.107302096s • [SLOW TEST:10.388 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:19:14.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 22 14:19:18.786: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 22 14:19:33.878: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:19:33.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5730" for this suite. 
Mar 22 14:19:39.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:19:39.986: INFO: namespace pods-5730 deletion completed in 6.102153452s • [SLOW TEST:25.323 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:19:39.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 22 14:19:40.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3485' Mar 22 14:19:43.435: INFO: stderr: "" Mar 22 14:19:43.435: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 22 14:19:43.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:19:43.519: INFO: stderr: "" Mar 22 14:19:43.519: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-rmg6j " Mar 22 14:19:43.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:19:43.612: INFO: stderr: "" Mar 22 14:19:43.612: INFO: stdout: "" Mar 22 14:19:43.612: INFO: update-demo-nautilus-d8gl5 is created but not running Mar 22 14:19:48.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:19:48.710: INFO: stderr: "" Mar 22 14:19:48.710: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-rmg6j " Mar 22 14:19:48.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:19:48.805: INFO: stderr: "" Mar 22 14:19:48.805: INFO: stdout: "true" Mar 22 14:19:48.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:19:48.900: INFO: stderr: "" Mar 22 14:19:48.900: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:19:48.900: INFO: validating pod update-demo-nautilus-d8gl5 Mar 22 14:19:48.905: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:19:48.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:19:48.905: INFO: update-demo-nautilus-d8gl5 is verified up and running Mar 22 14:19:48.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmg6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:19:49.002: INFO: stderr: "" Mar 22 14:19:49.002: INFO: stdout: "true" Mar 22 14:19:49.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmg6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:19:49.088: INFO: stderr: "" Mar 22 14:19:49.088: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:19:49.088: INFO: validating pod update-demo-nautilus-rmg6j Mar 22 14:19:49.093: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:19:49.093: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:19:49.093: INFO: update-demo-nautilus-rmg6j is verified up and running STEP: scaling down the replication controller Mar 22 14:19:49.094: INFO: scanned /root for discovery docs: Mar 22 14:19:49.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3485' Mar 22 14:19:50.268: INFO: stderr: "" Mar 22 14:19:50.268: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 22 14:19:50.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:19:50.359: INFO: stderr: "" Mar 22 14:19:50.359: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-rmg6j " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 22 14:19:55.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:19:55.456: INFO: stderr: "" Mar 22 14:19:55.456: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-rmg6j " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 22 14:20:00.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:20:00.554: INFO: stderr: "" Mar 22 14:20:00.554: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-rmg6j " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 22 14:20:05.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:20:05.656: INFO: stderr: "" Mar 22 14:20:05.656: INFO: stdout: "update-demo-nautilus-d8gl5 " Mar 22 14:20:05.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:05.743: INFO: stderr: "" Mar 22 14:20:05.743: INFO: stdout: "true" Mar 22 14:20:05.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:05.850: INFO: stderr: "" Mar 22 14:20:05.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:20:05.850: INFO: validating pod update-demo-nautilus-d8gl5 Mar 22 14:20:05.853: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:20:05.853: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:20:05.853: INFO: update-demo-nautilus-d8gl5 is verified up and running STEP: scaling up the replication controller Mar 22 14:20:05.855: INFO: scanned /root for discovery docs: Mar 22 14:20:05.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3485' Mar 22 14:20:06.988: INFO: stderr: "" Mar 22 14:20:06.988: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 22 14:20:06.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:20:07.086: INFO: stderr: "" Mar 22 14:20:07.086: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-wz9k9 " Mar 22 14:20:07.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:07.179: INFO: stderr: "" Mar 22 14:20:07.179: INFO: stdout: "true" Mar 22 14:20:07.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:07.416: INFO: stderr: "" Mar 22 14:20:07.416: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:20:07.416: INFO: validating pod update-demo-nautilus-d8gl5 Mar 22 14:20:07.421: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:20:07.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:20:07.421: INFO: update-demo-nautilus-d8gl5 is verified up and running Mar 22 14:20:07.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wz9k9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:07.520: INFO: stderr: "" Mar 22 14:20:07.520: INFO: stdout: "" Mar 22 14:20:07.520: INFO: update-demo-nautilus-wz9k9 is created but not running Mar 22 14:20:12.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3485' Mar 22 14:20:12.613: INFO: stderr: "" Mar 22 14:20:12.614: INFO: stdout: "update-demo-nautilus-d8gl5 update-demo-nautilus-wz9k9 " Mar 22 14:20:12.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:12.723: INFO: stderr: "" Mar 22 14:20:12.723: INFO: stdout: "true" Mar 22 14:20:12.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8gl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:12.817: INFO: stderr: "" Mar 22 14:20:12.817: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:20:12.817: INFO: validating pod update-demo-nautilus-d8gl5 Mar 22 14:20:12.820: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:20:12.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 22 14:20:12.820: INFO: update-demo-nautilus-d8gl5 is verified up and running Mar 22 14:20:12.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wz9k9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:12.948: INFO: stderr: "" Mar 22 14:20:12.948: INFO: stdout: "true" Mar 22 14:20:12.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wz9k9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485' Mar 22 14:20:13.049: INFO: stderr: "" Mar 22 14:20:13.049: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:20:13.049: INFO: validating pod update-demo-nautilus-wz9k9 Mar 22 14:20:13.054: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:20:13.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:20:13.054: INFO: update-demo-nautilus-wz9k9 is verified up and running STEP: using delete to clean up resources Mar 22 14:20:13.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3485' Mar 22 14:20:13.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 22 14:20:13.161: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 22 14:20:13.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3485' Mar 22 14:20:13.275: INFO: stderr: "No resources found.\n" Mar 22 14:20:13.275: INFO: stdout: "" Mar 22 14:20:13.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3485 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 14:20:13.414: INFO: stderr: "" Mar 22 14:20:13.414: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:20:13.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3485" for this suite. 
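
Stripped of the polling and validation, the scale-down/scale-up cycle above is just the following kubectl invocations (all taken verbatim from the log, minus the --kubeconfig flag):

# scale the replication controller down and confirm convergence
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3485
kubectl get pods -l name=update-demo --namespace=kubectl-3485

# scale it back up; the controller creates a replacement pod
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3485
kubectl get pods -l name=update-demo --namespace=kubectl-3485

# force-delete everything the test created
kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-3485
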
Mar 22 14:20:35.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:20:35.502: INFO: namespace kubectl-3485 deletion completed in 22.084280391s • [SLOW TEST:55.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:20:35.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 22 14:20:35.565: INFO: Waiting up to 5m0s for pod "pod-cd55e50d-403b-49b3-9fd7-2554bcac944f" in namespace "emptydir-4349" to be "success or failure" Mar 22 14:20:35.568: INFO: Pod "pod-cd55e50d-403b-49b3-9fd7-2554bcac944f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.570241ms Mar 22 14:20:37.572: INFO: Pod "pod-cd55e50d-403b-49b3-9fd7-2554bcac944f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006820542s Mar 22 14:20:39.576: INFO: Pod "pod-cd55e50d-403b-49b3-9fd7-2554bcac944f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011184106s STEP: Saw pod success Mar 22 14:20:39.576: INFO: Pod "pod-cd55e50d-403b-49b3-9fd7-2554bcac944f" satisfied condition "success or failure" Mar 22 14:20:39.579: INFO: Trying to get logs from node iruya-worker2 pod pod-cd55e50d-403b-49b3-9fd7-2554bcac944f container test-container: STEP: delete the pod Mar 22 14:20:39.639: INFO: Waiting for pod pod-cd55e50d-403b-49b3-9fd7-2554bcac944f to disappear Mar 22 14:20:39.645: INFO: Pod pod-cd55e50d-403b-49b3-9fd7-2554bcac944f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:20:39.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4349" for this suite. 
Mar 22 14:20:45.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:20:45.747: INFO: namespace emptydir-4349 deletion completed in 6.097877041s • [SLOW TEST:10.245 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:20:45.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7f84e524-fd08-4b2f-b1e1-336a4439cc3b STEP: Creating a pod to test consume secrets Mar 22 14:20:45.825: INFO: Waiting up to 5m0s for pod "pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d" in namespace "secrets-3819" to be "success or failure" Mar 22 14:20:45.828: INFO: Pod "pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.265106ms Mar 22 14:20:47.859: INFO: Pod "pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033852874s Mar 22 14:20:49.863: INFO: Pod "pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037765448s STEP: Saw pod success Mar 22 14:20:49.863: INFO: Pod "pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d" satisfied condition "success or failure" Mar 22 14:20:49.866: INFO: Trying to get logs from node iruya-worker pod pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d container secret-volume-test: STEP: delete the pod Mar 22 14:20:49.890: INFO: Waiting for pod pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d to disappear Mar 22 14:20:49.922: INFO: Pod pod-secrets-431214d7-0dbc-4ce3-94a4-9a41d5be510d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:20:49.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3819" for this suite. 
Mar 22 14:20:55.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:20:56.028: INFO: namespace secrets-3819 deletion completed in 6.101204518s • [SLOW TEST:10.281 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:20:56.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:20:56.088: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:20:57.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5443" for this suite. 
Mar 22 14:21:03.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:21:03.269: INFO: namespace custom-resource-definition-5443 deletion completed in 6.083207134s • [SLOW TEST:7.242 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:21:03.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4538 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 22 14:21:03.335: INFO: Found 0 stateful pods, waiting for 3 Mar 22 14:21:13.340: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:21:13.341: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:21:13.341: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:21:13.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4538 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:21:13.684: INFO: stderr: "I0322 14:21:13.564598 2361 log.go:172] (0xc0009684d0) (0xc00069c820) Create stream\nI0322 14:21:13.564646 2361 log.go:172] (0xc0009684d0) (0xc00069c820) Stream added, broadcasting: 1\nI0322 14:21:13.567273 2361 log.go:172] (0xc0009684d0) Reply frame received for 1\nI0322 14:21:13.567329 2361 log.go:172] (0xc0009684d0) (0xc000876000) Create stream\nI0322 14:21:13.567352 2361 log.go:172] (0xc0009684d0) (0xc000876000) Stream added, broadcasting: 3\nI0322 14:21:13.568249 2361 log.go:172] (0xc0009684d0) Reply frame received for 3\nI0322 14:21:13.568289 2361 log.go:172] (0xc0009684d0) (0xc00069c8c0) Create stream\nI0322 14:21:13.568316 2361 log.go:172] (0xc0009684d0) (0xc00069c8c0) Stream added, broadcasting: 5\nI0322 14:21:13.569221 2361 log.go:172] (0xc0009684d0) Reply frame received for 5\nI0322 14:21:13.647730 2361 log.go:172] (0xc0009684d0) Data frame received for 5\nI0322 
14:21:13.647864 2361 log.go:172] (0xc00069c8c0) (5) Data frame handling\nI0322 14:21:13.647910 2361 log.go:172] (0xc00069c8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:21:13.676358 2361 log.go:172] (0xc0009684d0) Data frame received for 3\nI0322 14:21:13.676406 2361 log.go:172] (0xc000876000) (3) Data frame handling\nI0322 14:21:13.676478 2361 log.go:172] (0xc000876000) (3) Data frame sent\nI0322 14:21:13.676723 2361 log.go:172] (0xc0009684d0) Data frame received for 5\nI0322 14:21:13.676763 2361 log.go:172] (0xc00069c8c0) (5) Data frame handling\nI0322 14:21:13.676788 2361 log.go:172] (0xc0009684d0) Data frame received for 3\nI0322 14:21:13.676858 2361 log.go:172] (0xc000876000) (3) Data frame handling\nI0322 14:21:13.678941 2361 log.go:172] (0xc0009684d0) Data frame received for 1\nI0322 14:21:13.678965 2361 log.go:172] (0xc00069c820) (1) Data frame handling\nI0322 14:21:13.678979 2361 log.go:172] (0xc00069c820) (1) Data frame sent\nI0322 14:21:13.678994 2361 log.go:172] (0xc0009684d0) (0xc00069c820) Stream removed, broadcasting: 1\nI0322 14:21:13.679010 2361 log.go:172] (0xc0009684d0) Go away received\nI0322 14:21:13.679548 2361 log.go:172] (0xc0009684d0) (0xc00069c820) Stream removed, broadcasting: 1\nI0322 14:21:13.679580 2361 log.go:172] (0xc0009684d0) (0xc000876000) Stream removed, broadcasting: 3\nI0322 14:21:13.679592 2361 log.go:172] (0xc0009684d0) (0xc00069c8c0) Stream removed, broadcasting: 5\n" Mar 22 14:21:13.684: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:21:13.684: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 22 14:21:23.725: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 22 14:21:33.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4538 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:21:33.974: INFO: stderr: "I0322 14:21:33.874297 2385 log.go:172] (0xc000ada420) (0xc0005fe6e0) Create stream\nI0322 14:21:33.874346 2385 log.go:172] (0xc000ada420) (0xc0005fe6e0) Stream added, broadcasting: 1\nI0322 14:21:33.876523 2385 log.go:172] (0xc000ada420) Reply frame received for 1\nI0322 14:21:33.876574 2385 log.go:172] (0xc000ada420) (0xc0005fe000) Create stream\nI0322 14:21:33.876593 2385 log.go:172] (0xc000ada420) (0xc0005fe000) Stream added, broadcasting: 3\nI0322 14:21:33.883277 2385 log.go:172] (0xc000ada420) Reply frame received for 3\nI0322 14:21:33.883320 2385 log.go:172] (0xc000ada420) (0xc0005fe0a0) Create stream\nI0322 14:21:33.883329 2385 log.go:172] (0xc000ada420) (0xc0005fe0a0) Stream added, broadcasting: 5\nI0322 14:21:33.884241 2385 log.go:172] (0xc000ada420) Reply frame received for 5\nI0322 14:21:33.967177 2385 log.go:172] (0xc000ada420) Data frame received for 3\nI0322 14:21:33.967241 2385 log.go:172] (0xc0005fe000) (3) Data frame handling\nI0322 14:21:33.967267 2385 log.go:172] (0xc0005fe000) (3) Data frame sent\nI0322 14:21:33.967286 2385 log.go:172] (0xc000ada420) Data frame received for 3\nI0322 14:21:33.967303 2385 log.go:172] (0xc0005fe000) (3) Data frame handling\nI0322 14:21:33.967492 2385 log.go:172] (0xc000ada420) Data frame received for 5\nI0322 14:21:33.967513 2385 log.go:172] (0xc0005fe0a0) (5) Data 
frame handling\nI0322 14:21:33.967532 2385 log.go:172] (0xc0005fe0a0) (5) Data frame sent\nI0322 14:21:33.967546 2385 log.go:172] (0xc000ada420) Data frame received for 5\nI0322 14:21:33.967555 2385 log.go:172] (0xc0005fe0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:21:33.968849 2385 log.go:172] (0xc000ada420) Data frame received for 1\nI0322 14:21:33.968873 2385 log.go:172] (0xc0005fe6e0) (1) Data frame handling\nI0322 14:21:33.968890 2385 log.go:172] (0xc0005fe6e0) (1) Data frame sent\nI0322 14:21:33.968909 2385 log.go:172] (0xc000ada420) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0322 14:21:33.968978 2385 log.go:172] (0xc000ada420) Go away received\nI0322 14:21:33.969508 2385 log.go:172] (0xc000ada420) (0xc0005fe6e0) Stream removed, broadcasting: 1\nI0322 14:21:33.969539 2385 log.go:172] (0xc000ada420) (0xc0005fe000) Stream removed, broadcasting: 3\nI0322 14:21:33.969557 2385 log.go:172] (0xc000ada420) (0xc0005fe0a0) Stream removed, broadcasting: 5\n" Mar 22 14:21:33.974: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:21:33.974: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:21:43.996: INFO: Waiting for StatefulSet statefulset-4538/ss2 to complete update Mar 22 14:21:43.996: INFO: Waiting for Pod statefulset-4538/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 22 14:21:43.996: INFO: Waiting for Pod statefulset-4538/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 22 14:21:54.050: INFO: Waiting for StatefulSet statefulset-4538/ss2 to complete update STEP: Rolling back to a previous revision Mar 22 14:22:04.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4538 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:22:04.285: INFO: stderr: "I0322 14:22:04.147189 2405 log.go:172] (0xc0009c4580) (0xc00027a820) Create stream\nI0322 14:22:04.147248 2405 log.go:172] (0xc0009c4580) (0xc00027a820) Stream added, broadcasting: 1\nI0322 14:22:04.154333 2405 log.go:172] (0xc0009c4580) Reply frame received for 1\nI0322 14:22:04.154384 2405 log.go:172] (0xc0009c4580) (0xc0003a2320) Create stream\nI0322 14:22:04.154394 2405 log.go:172] (0xc0009c4580) (0xc0003a2320) Stream added, broadcasting: 3\nI0322 14:22:04.155842 2405 log.go:172] (0xc0009c4580) Reply frame received for 3\nI0322 14:22:04.155915 2405 log.go:172] (0xc0009c4580) (0xc00027a000) Create stream\nI0322 14:22:04.155926 2405 log.go:172] (0xc0009c4580) (0xc00027a000) Stream added, broadcasting: 5\nI0322 14:22:04.156748 2405 log.go:172] (0xc0009c4580) Reply frame received for 5\nI0322 14:22:04.247465 2405 log.go:172] (0xc0009c4580) Data frame received for 5\nI0322 14:22:04.247513 2405 log.go:172] (0xc00027a000) (5) Data frame handling\nI0322 14:22:04.247535 2405 log.go:172] (0xc00027a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:22:04.277712 2405 log.go:172] (0xc0009c4580) Data frame received for 5\nI0322 14:22:04.277758 2405 log.go:172] (0xc00027a000) (5) Data frame handling\nI0322 14:22:04.277802 2405 log.go:172] (0xc0009c4580) Data frame received for 3\nI0322 14:22:04.277818 2405 log.go:172] (0xc0003a2320) (3) Data frame handling\nI0322 14:22:04.277836 2405 log.go:172] (0xc0003a2320) (3) Data frame sent\nI0322 14:22:04.278146 2405 log.go:172] (0xc0009c4580) Data frame received for 3\nI0322 
14:22:04.278180 2405 log.go:172] (0xc0003a2320) (3) Data frame handling\nI0322 14:22:04.280006 2405 log.go:172] (0xc0009c4580) Data frame received for 1\nI0322 14:22:04.280090 2405 log.go:172] (0xc00027a820) (1) Data frame handling\nI0322 14:22:04.280126 2405 log.go:172] (0xc00027a820) (1) Data frame sent\nI0322 14:22:04.280170 2405 log.go:172] (0xc0009c4580) (0xc00027a820) Stream removed, broadcasting: 1\nI0322 14:22:04.280214 2405 log.go:172] (0xc0009c4580) Go away received\nI0322 14:22:04.280750 2405 log.go:172] (0xc0009c4580) (0xc00027a820) Stream removed, broadcasting: 1\nI0322 14:22:04.280774 2405 log.go:172] (0xc0009c4580) (0xc0003a2320) Stream removed, broadcasting: 3\nI0322 14:22:04.280785 2405 log.go:172] (0xc0009c4580) (0xc00027a000) Stream removed, broadcasting: 5\n" Mar 22 14:22:04.285: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:22:04.285: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:22:14.317: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 22 14:22:24.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4538 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:22:24.588: INFO: stderr: "I0322 14:22:24.488523 2427 log.go:172] (0xc000a70630) (0xc000614a00) Create stream\nI0322 14:22:24.488584 2427 log.go:172] (0xc000a70630) (0xc000614a00) Stream added, broadcasting: 1\nI0322 14:22:24.492094 2427 log.go:172] (0xc000a70630) Reply frame received for 1\nI0322 14:22:24.492141 2427 log.go:172] (0xc000a70630) (0xc000614140) Create stream\nI0322 14:22:24.492154 2427 log.go:172] (0xc000a70630) (0xc000614140) Stream added, broadcasting: 3\nI0322 14:22:24.493257 2427 log.go:172] (0xc000a70630) Reply frame received for 3\nI0322 14:22:24.493305 2427 log.go:172] (0xc000a70630) (0xc000180000) Create stream\nI0322 14:22:24.493317 2427 log.go:172] (0xc000a70630) (0xc000180000) Stream added, broadcasting: 5\nI0322 14:22:24.494481 2427 log.go:172] (0xc000a70630) Reply frame received for 5\nI0322 14:22:24.581461 2427 log.go:172] (0xc000a70630) Data frame received for 3\nI0322 14:22:24.581510 2427 log.go:172] (0xc000614140) (3) Data frame handling\nI0322 14:22:24.581525 2427 log.go:172] (0xc000614140) (3) Data frame sent\nI0322 14:22:24.581551 2427 log.go:172] (0xc000a70630) Data frame received for 5\nI0322 14:22:24.581606 2427 log.go:172] (0xc000180000) (5) Data frame handling\nI0322 14:22:24.581622 2427 log.go:172] (0xc000180000) (5) Data frame sent\nI0322 14:22:24.581739 2427 log.go:172] (0xc000a70630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:22:24.581767 2427 log.go:172] (0xc000180000) (5) Data frame handling\nI0322 14:22:24.581817 2427 log.go:172] (0xc000a70630) Data frame received for 3\nI0322 14:22:24.581845 2427 log.go:172] (0xc000614140) (3) Data frame handling\nI0322 14:22:24.583248 2427 log.go:172] (0xc000a70630) Data frame received for 1\nI0322 14:22:24.583280 2427 log.go:172] (0xc000614a00) (1) Data frame handling\nI0322 14:22:24.583298 2427 log.go:172] (0xc000614a00) (1) Data frame sent\nI0322 14:22:24.583459 2427 log.go:172] (0xc000a70630) (0xc000614a00) Stream removed, broadcasting: 1\nI0322 14:22:24.583716 2427 log.go:172] (0xc000a70630) Go away received\nI0322 14:22:24.583840 2427 log.go:172] (0xc000a70630) (0xc000614a00) Stream removed, broadcasting: 1\nI0322 14:22:24.583865 
2427 log.go:172] (0xc000a70630) (0xc000614140) Stream removed, broadcasting: 3\nI0322 14:22:24.583879 2427 log.go:172] (0xc000a70630) (0xc000180000) Stream removed, broadcasting: 5\n" Mar 22 14:22:24.588: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:22:24.588: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:22:54.628: INFO: Waiting for StatefulSet statefulset-4538/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 22 14:23:04.643: INFO: Deleting all statefulset in ns statefulset-4538 Mar 22 14:23:04.646: INFO: Scaling statefulset ss2 to 0 Mar 22 14:23:24.671: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:23:24.674: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:23:24.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4538" for this suite. Mar 22 14:23:30.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:23:30.803: INFO: namespace statefulset-4538 deletion completed in 6.113869523s • [SLOW TEST:147.533 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:23:30.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:23:30.861: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8" in namespace "downward-api-7266" to be "success or failure" Mar 22 14:23:30.865: INFO: Pod "downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.157436ms Mar 22 14:23:32.869: INFO: Pod "downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007195527s Mar 22 14:23:34.885: INFO: Pod "downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023777615s STEP: Saw pod success Mar 22 14:23:34.885: INFO: Pod "downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8" satisfied condition "success or failure" Mar 22 14:23:34.888: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8 container client-container: STEP: delete the pod Mar 22 14:23:34.924: INFO: Waiting for pod downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8 to disappear Mar 22 14:23:34.937: INFO: Pod downwardapi-volume-e304acc9-81bb-444f-a200-a8dea249b2d8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:23:34.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7266" for this suite. Mar 22 14:23:40.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:23:41.032: INFO: namespace downward-api-7266 deletion completed in 6.09144668s • [SLOW TEST:10.229 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:23:41.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9c0c10d3-f696-4dec-88ee-45514fcadbc2 STEP: Creating a pod to test consume configMaps Mar 22 14:23:41.107: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd" in namespace "projected-142" to be "success or failure" Mar 22 14:23:41.111: INFO: Pod "pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246733ms Mar 22 14:23:43.115: INFO: Pod "pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007799507s Mar 22 14:23:45.119: INFO: Pod "pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012205768s STEP: Saw pod success Mar 22 14:23:45.120: INFO: Pod "pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd" satisfied condition "success or failure" Mar 22 14:23:45.123: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd container projected-configmap-volume-test: STEP: delete the pod Mar 22 14:23:45.142: INFO: Waiting for pod pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd to disappear Mar 22 14:23:45.152: INFO: Pod pod-projected-configmaps-abd62370-9886-4b26-a9df-c4191f5047bd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:23:45.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-142" for this suite. Mar 22 14:23:51.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:23:51.281: INFO: namespace projected-142 deletion completed in 6.125045054s • [SLOW TEST:10.248 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:23:51.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 22 14:23:51.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0" in namespace "downward-api-6536" to be "success or failure" Mar 22 14:23:51.364: INFO: Pod "downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.185941ms Mar 22 14:23:53.368: INFO: Pod "downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030514222s Mar 22 14:23:55.373: INFO: Pod "downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035413307s STEP: Saw pod success Mar 22 14:23:55.373: INFO: Pod "downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0" satisfied condition "success or failure" Mar 22 14:23:55.376: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0 container client-container: STEP: delete the pod Mar 22 14:23:55.398: INFO: Waiting for pod downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0 to disappear Mar 22 14:23:55.413: INFO: Pod downwardapi-volume-7d21200e-2924-4f9d-8003-0938392dfed0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:23:55.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6536" for this suite. Mar 22 14:24:01.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:24:01.513: INFO: namespace downward-api-6536 deletion completed in 6.096146954s • [SLOW TEST:10.232 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:24:01.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 22 14:24:01.560: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:24:07.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1937" for this suite. 
Mar 22 14:24:13.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:24:13.301: INFO: namespace init-container-1937 deletion completed in 6.093876359s • [SLOW TEST:11.787 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:24:13.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:24:13.350: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 22 14:24:13.362: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 22 14:24:18.367: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 22 14:24:18.367: INFO: Creating deployment "test-rolling-update-deployment" Mar 22 14:24:18.372: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 22 14:24:18.377: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 22 14:24:20.385: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 22 14:24:20.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483858, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483858, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483858, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720483858, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:24:22.392: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 
Mar 22 14:24:22.402: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2482,SelfLink:/apis/apps/v1/namespaces/deployment-2482/deployments/test-rolling-update-deployment,UID:cf02bc05-3858-4fdf-8d8a-2b31fed6d1fc,ResourceVersion:1252530,Generation:1,CreationTimestamp:2020-03-22 14:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-22 14:24:18 +0000 UTC 2020-03-22 14:24:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-22 14:24:21 +0000 UTC 2020-03-22 14:24:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 22 14:24:22.405: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2482,SelfLink:/apis/apps/v1/namespaces/deployment-2482/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4a838122-810c-49c7-b203-ed0d4991f82a,ResourceVersion:1252519,Generation:1,CreationTimestamp:2020-03-22 14:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf02bc05-3858-4fdf-8d8a-2b31fed6d1fc 0xc002729d87 0xc002729d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 22 14:24:22.405: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 22 14:24:22.405: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2482,SelfLink:/apis/apps/v1/namespaces/deployment-2482/replicasets/test-rolling-update-controller,UID:9812af34-4e33-43ba-90a7-da05c9b53f91,ResourceVersion:1252528,Generation:2,CreationTimestamp:2020-03-22 14:24:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf02bc05-3858-4fdf-8d8a-2b31fed6d1fc 0xc002729cb7 0xc002729cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 14:24:22.408: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-fgx5z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-fgx5z,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2482,SelfLink:/api/v1/namespaces/deployment-2482/pods/test-rolling-update-deployment-79f6b9d75c-fgx5z,UID:e666baca-5a5b-4932-8f92-104873ab5685,ResourceVersion:1252518,Generation:0,CreationTimestamp:2020-03-22 14:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4a838122-810c-49c7-b203-ed0d4991f82a 0xc003350e27 0xc003350e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jczgg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jczgg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jczgg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003350ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003350ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:24:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:24:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:24:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:24:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.58,StartTime:2020-03-22 14:24:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-22 14:24:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0f4dcdfd218bec8e40e4f8545c8475c0fa18d7943f89314b0cc7471b1f5092ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:24:22.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2482" for this suite. 
Mar 22 14:24:28.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:24:28.499: INFO: namespace deployment-2482 deletion completed in 6.086935079s • [SLOW TEST:15.196 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:24:28.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 22 14:24:28.566: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:24:35.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1381" for this suite. 
Mar 22 14:24:57.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:24:57.920: INFO: namespace init-container-1381 deletion completed in 22.090419136s • [SLOW TEST:29.421 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:24:57.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 22 14:24:58.006: INFO: Waiting up to 5m0s for pod "client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0" in namespace "containers-9572" to be "success or failure" Mar 22 14:24:58.010: INFO: Pod "client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738683ms Mar 22 14:25:00.014: INFO: Pod "client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007951296s Mar 22 14:25:02.018: INFO: Pod "client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011893006s STEP: Saw pod success Mar 22 14:25:02.018: INFO: Pod "client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0" satisfied condition "success or failure" Mar 22 14:25:02.022: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0 container test-container: STEP: delete the pod Mar 22 14:25:02.052: INFO: Waiting for pod client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0 to disappear Mar 22 14:25:02.062: INFO: Pod client-containers-ad637cc1-5c47-41ed-bc1c-026b3c73c0a0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:25:02.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9572" for this suite. 
Mar 22 14:25:08.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:25:08.166: INFO: namespace containers-9572 deletion completed in 6.100976522s • [SLOW TEST:10.246 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:25:08.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 22 14:25:08.251: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3939,SelfLink:/api/v1/namespaces/watch-3939/configmaps/e2e-watch-test-watch-closed,UID:1c98b93d-6317-44b3-830b-3ed168f33311,ResourceVersion:1252712,Generation:0,CreationTimestamp:2020-03-22 14:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 22 14:25:08.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3939,SelfLink:/api/v1/namespaces/watch-3939/configmaps/e2e-watch-test-watch-closed,UID:1c98b93d-6317-44b3-830b-3ed168f33311,ResourceVersion:1252713,Generation:0,CreationTimestamp:2020-03-22 14:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 22 14:25:08.261: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3939,SelfLink:/api/v1/namespaces/watch-3939/configmaps/e2e-watch-test-watch-closed,UID:1c98b93d-6317-44b3-830b-3ed168f33311,ResourceVersion:1252714,Generation:0,CreationTimestamp:2020-03-22 14:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 22 14:25:08.262: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3939,SelfLink:/api/v1/namespaces/watch-3939/configmaps/e2e-watch-test-watch-closed,UID:1c98b93d-6317-44b3-830b-3ed168f33311,ResourceVersion:1252715,Generation:0,CreationTimestamp:2020-03-22 14:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:25:08.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3939" for this suite. Mar 22 14:25:14.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:25:14.353: INFO: namespace watch-3939 deletion completed in 6.08719541s • [SLOW TEST:6.187 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:25:14.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 22 14:25:14.448: INFO: Waiting up to 5m0s for pod "downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f" in namespace "downward-api-5458" to be "success or failure" Mar 22 14:25:14.469: INFO: Pod "downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.3891ms Mar 22 14:25:16.475: INFO: Pod "downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02636545s Mar 22 14:25:18.478: INFO: Pod "downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029422669s STEP: Saw pod success Mar 22 14:25:18.478: INFO: Pod "downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f" satisfied condition "success or failure" Mar 22 14:25:18.481: INFO: Trying to get logs from node iruya-worker pod downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f container dapi-container: STEP: delete the pod Mar 22 14:25:18.501: INFO: Waiting for pod downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f to disappear Mar 22 14:25:18.517: INFO: Pod downward-api-921ffb78-b9a1-40df-8ded-3fe5f318aa3f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:25:18.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5458" for this suite. Mar 22 14:25:24.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:25:24.641: INFO: namespace downward-api-5458 deletion completed in 6.121186757s • [SLOW TEST:10.288 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:25:24.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:25:24.729: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 22 14:25:24.758: INFO: Number of nodes with available pods: 0 Mar 22 14:25:24.758: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 22 14:25:24.820: INFO: Number of nodes with available pods: 0 Mar 22 14:25:24.820: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:25.825: INFO: Number of nodes with available pods: 0 Mar 22 14:25:25.825: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:26.824: INFO: Number of nodes with available pods: 0 Mar 22 14:25:26.824: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:27.824: INFO: Number of nodes with available pods: 0 Mar 22 14:25:27.824: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:28.824: INFO: Number of nodes with available pods: 1 Mar 22 14:25:28.825: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 22 14:25:28.859: INFO: Number of nodes with available pods: 1 Mar 22 14:25:28.859: INFO: Number of running nodes: 0, number of available pods: 1 Mar 22 14:25:29.863: INFO: Number of nodes with available pods: 0 Mar 22 14:25:29.863: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 22 14:25:29.873: INFO: Number of nodes with available pods: 0 Mar 22 14:25:29.873: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:30.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:30.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:31.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:31.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:32.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:32.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:33.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:33.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:34.883: INFO: Number of nodes with available pods: 0 Mar 22 14:25:34.883: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:35.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:35.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:36.877: INFO: Number of nodes with available pods: 0 Mar 22 14:25:36.877: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:37.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:37.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:38.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:38.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:39.877: INFO: Number of nodes with available pods: 0 Mar 22 14:25:39.877: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:40.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:40.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:41.877: INFO: Number of nodes with available pods: 0 Mar 22 14:25:41.877: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:42.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:42.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:43.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:43.878: INFO: Node iruya-worker is running more than one daemon pod Mar 22 14:25:44.878: INFO: Number of nodes with available pods: 0 Mar 22 14:25:44.878: INFO: Node iruya-worker is running more than one daemon 
pod Mar 22 14:25:45.878: INFO: Number of nodes with available pods: 1 Mar 22 14:25:45.878: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6459, will wait for the garbage collector to delete the pods Mar 22 14:25:45.945: INFO: Deleting DaemonSet.extensions daemon-set took: 7.366385ms Mar 22 14:25:46.245: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.311192ms Mar 22 14:25:52.248: INFO: Number of nodes with available pods: 0 Mar 22 14:25:52.248: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 14:25:52.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6459/daemonsets","resourceVersion":"1252880"},"items":null} Mar 22 14:25:52.253: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6459/pods","resourceVersion":"1252880"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:25:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6459" for this suite. Mar 22 14:25:58.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:25:58.380: INFO: namespace daemonsets-6459 deletion completed in 6.093902503s • [SLOW TEST:33.738 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:25:58.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 14:25:58.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1356' Mar 22 14:25:58.545: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 22 14:25:58.545: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 22 14:25:58.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1356' Mar 22 14:25:58.662: INFO: stderr: "" Mar 22 14:25:58.662: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:25:58.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1356" for this suite. Mar 22 14:26:04.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:26:04.775: INFO: namespace kubectl-1356 deletion completed in 6.110066561s • [SLOW TEST:6.395 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:26:04.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 14:26:04.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2547' Mar 22 14:26:04.944: INFO: stderr: "" Mar 22 14:26:04.944: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 22 14:26:04.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2547' Mar 22 14:26:12.168: INFO: stderr: "" Mar 22 14:26:12.168: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:26:12.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2547" for this suite. Mar 22 14:26:18.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:26:18.281: INFO: namespace kubectl-2547 deletion completed in 6.11000498s • [SLOW TEST:13.505 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:26:18.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 14:26:24.415: INFO: DNS probes using dns-test-84714a4e-c308-45a0-bd53-ebe4120adbe4 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 14:26:30.527: INFO: File wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:30.531: INFO: File jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 22 14:26:30.531: INFO: Lookups using dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 failed for: [wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local] Mar 22 14:26:35.544: INFO: File wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:35.547: INFO: File jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:35.547: INFO: Lookups using dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 failed for: [wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local] Mar 22 14:26:40.535: INFO: File wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:40.539: INFO: File jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:40.539: INFO: Lookups using dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 failed for: [wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local] Mar 22 14:26:45.536: INFO: File wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:45.539: INFO: File jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:45.539: INFO: Lookups using dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 failed for: [wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local] Mar 22 14:26:50.536: INFO: File wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 22 14:26:50.540: INFO: File jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local from pod dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 22 14:26:50.540: INFO: Lookups using dns-7580/dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 failed for: [wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local] Mar 22 14:26:55.540: INFO: DNS probes using dns-test-ae811c96-69a9-456a-95b5-2b73eb4109f8 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7580.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7580.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 14:27:02.069: INFO: DNS probes using dns-test-62a0f94e-4de8-4a5e-90dd-3f8767e19a71 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:27:02.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7580" for this suite. Mar 22 14:27:08.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:27:08.292: INFO: namespace dns-7580 deletion completed in 6.11258922s • [SLOW TEST:50.011 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:27:08.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0322 14:27:48.372021 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 22 14:27:48.372: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:27:48.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8623" for this suite. Mar 22 14:27:58.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:27:58.568: INFO: namespace gc-8623 deletion completed in 10.193095784s • [SLOW TEST:50.275 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:27:58.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 22 14:28:03.201: INFO: Successfully updated pod "annotationupdate1f035e5e-54eb-4692-a3fa-82f115ca1b99" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:28:05.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4393" for this suite. 
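The annotation-update check above works by projecting pod metadata into a file through a downwardAPI volume and then mutating the annotations. A minimal sketch of that setup follows; the pod name, image, command, and annotation values are illustrative assumptions, only the namespace comes from this run.

    # create a pod whose annotations are projected into a file via the downward API
    kubectl apply -n downward-api-4393 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo   # hypothetical name
      annotations:
        build: one
    spec:
      containers:
      - name: client
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    # the kubelet should eventually rewrite /etc/podinfo/annotations after this:
    kubectl annotate pod annotationupdate-demo -n downward-api-4393 build=two --overwrite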
Mar 22 14:28:27.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:28:27.352: INFO: namespace downward-api-4393 deletion completed in 22.101620409s • [SLOW TEST:28.784 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:28:27.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 22 14:28:27.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5805' Mar 22 14:28:27.763: INFO: stderr: "" Mar 22 14:28:27.763: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 22 14:28:27.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Mar 22 14:28:27.865: INFO: stderr: "" Mar 22 14:28:27.865: INFO: stdout: "update-demo-nautilus-b2pcw update-demo-nautilus-zn6nd " Mar 22 14:28:27.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2pcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:27.955: INFO: stderr: "" Mar 22 14:28:27.955: INFO: stdout: "" Mar 22 14:28:27.955: INFO: update-demo-nautilus-b2pcw is created but not running Mar 22 14:28:32.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Mar 22 14:28:33.061: INFO: stderr: "" Mar 22 14:28:33.061: INFO: stdout: "update-demo-nautilus-b2pcw update-demo-nautilus-zn6nd " Mar 22 14:28:33.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2pcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:33.148: INFO: stderr: "" Mar 22 14:28:33.148: INFO: stdout: "true" Mar 22 14:28:33.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2pcw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:33.229: INFO: stderr: "" Mar 22 14:28:33.229: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:28:33.229: INFO: validating pod update-demo-nautilus-b2pcw Mar 22 14:28:33.233: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:28:33.233: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:28:33.233: INFO: update-demo-nautilus-b2pcw is verified up and running Mar 22 14:28:33.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zn6nd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:33.323: INFO: stderr: "" Mar 22 14:28:33.323: INFO: stdout: "true" Mar 22 14:28:33.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zn6nd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:33.426: INFO: stderr: "" Mar 22 14:28:33.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 22 14:28:33.426: INFO: validating pod update-demo-nautilus-zn6nd Mar 22 14:28:33.430: INFO: got data: { "image": "nautilus.jpg" } Mar 22 14:28:33.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 14:28:33.430: INFO: update-demo-nautilus-zn6nd is verified up and running STEP: rolling-update to new replication controller Mar 22 14:28:33.432: INFO: scanned /root for discovery docs: Mar 22 14:28:33.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5805' Mar 22 14:28:55.926: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 22 14:28:55.926: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 22 14:28:55.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5805' Mar 22 14:28:56.021: INFO: stderr: "" Mar 22 14:28:56.021: INFO: stdout: "update-demo-kitten-7s4bs update-demo-kitten-f8jj8 " Mar 22 14:28:56.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7s4bs -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:56.113: INFO: stderr: "" Mar 22 14:28:56.113: INFO: stdout: "true" Mar 22 14:28:56.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7s4bs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:56.205: INFO: stderr: "" Mar 22 14:28:56.205: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 22 14:28:56.205: INFO: validating pod update-demo-kitten-7s4bs Mar 22 14:28:56.208: INFO: got data: { "image": "kitten.jpg" } Mar 22 14:28:56.208: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 22 14:28:56.208: INFO: update-demo-kitten-7s4bs is verified up and running Mar 22 14:28:56.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f8jj8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:56.307: INFO: stderr: "" Mar 22 14:28:56.307: INFO: stdout: "true" Mar 22 14:28:56.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f8jj8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5805' Mar 22 14:28:56.400: INFO: stderr: "" Mar 22 14:28:56.400: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 22 14:28:56.400: INFO: validating pod update-demo-kitten-f8jj8 Mar 22 14:28:56.404: INFO: got data: { "image": "kitten.jpg" } Mar 22 14:28:56.404: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 22 14:28:56.404: INFO: update-demo-kitten-f8jj8 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:28:56.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5805" for this suite. 
Mar 22 14:29:20.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:29:20.498: INFO: namespace kubectl-5805 deletion completed in 24.091369775s • [SLOW TEST:53.145 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:29:20.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8d013328-8d14-4ab6-a48f-f9cbb9b75aca STEP: Creating a pod to test consume secrets Mar 22 14:29:20.639: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f" in namespace "projected-8626" to be "success or failure" Mar 22 14:29:20.680: INFO: Pod "pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.76747ms Mar 22 14:29:22.684: INFO: Pod "pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044542679s Mar 22 14:29:24.689: INFO: Pod "pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049042565s STEP: Saw pod success Mar 22 14:29:24.689: INFO: Pod "pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f" satisfied condition "success or failure" Mar 22 14:29:24.691: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f container projected-secret-volume-test: STEP: delete the pod Mar 22 14:29:24.724: INFO: Waiting for pod pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f to disappear Mar 22 14:29:24.733: INFO: Pod pod-projected-secrets-b90d4dd0-eb55-4e6e-b82e-bd7e3e21675f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:29:24.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8626" for this suite. 
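The projected-secret test above boils down to mounting a secret through a projected volume and reading a key back from a short-lived pod. A minimal sketch with illustrative names and values; only the namespace comes from this run:

    kubectl apply -n projected-8626 -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: projected-secret-demo   # hypothetical name
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-demo
    EOF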
Mar 22 14:29:30.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:29:30.827: INFO: namespace projected-8626 deletion completed in 6.091096212s • [SLOW TEST:10.329 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:29:30.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 22 14:29:30.929: INFO: Creating deployment "test-recreate-deployment" Mar 22 14:29:30.956: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 22 14:29:30.974: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 22 14:29:32.981: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 22 14:29:32.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720484170, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720484170, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720484171, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720484170, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 14:29:34.987: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 22 14:29:34.995: INFO: Updating deployment test-recreate-deployment Mar 22 14:29:34.995: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 22 14:29:35.219: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6462,SelfLink:/apis/apps/v1/namespaces/deployment-6462/deployments/test-recreate-deployment,UID:91fe8ea2-66c9-451b-af31-63cd624457cf,ResourceVersion:1253907,Generation:2,CreationTimestamp:2020-03-22 14:29:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-22 14:29:35 +0000 UTC 2020-03-22 14:29:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-22 14:29:35 +0000 UTC 2020-03-22 14:29:30 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 22 14:29:35.225: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6462,SelfLink:/apis/apps/v1/namespaces/deployment-6462/replicasets/test-recreate-deployment-5c8c9cc69d,UID:de4fd5ac-633e-48f6-9cb5-9886aea40081,ResourceVersion:1253903,Generation:1,CreationTimestamp:2020-03-22 14:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 91fe8ea2-66c9-451b-af31-63cd624457cf 0xc00211f6f7 0xc00211f6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 14:29:35.225: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 22 14:29:35.225: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6462,SelfLink:/apis/apps/v1/namespaces/deployment-6462/replicasets/test-recreate-deployment-6df85df6b9,UID:7a9e51b0-1c99-4f04-8f58-55b4ad7826d0,ResourceVersion:1253894,Generation:2,CreationTimestamp:2020-03-22 14:29:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 91fe8ea2-66c9-451b-af31-63cd624457cf 0xc00211f7c7 0xc00211f7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 22 14:29:35.228: INFO: Pod "test-recreate-deployment-5c8c9cc69d-nkmjr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-nkmjr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6462,SelfLink:/api/v1/namespaces/deployment-6462/pods/test-recreate-deployment-5c8c9cc69d-nkmjr,UID:91f92baa-5192-4b92-bbda-e8c9af211c67,ResourceVersion:1253908,Generation:0,CreationTimestamp:2020-03-22 14:29:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d de4fd5ac-633e-48f6-9cb5-9886aea40081 0xc002b68a97 0xc002b68a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47mq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47mq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-47mq2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b68bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b68be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:29:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:29:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-22 14:29:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:29:35.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6462" for this suite. 
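The Deployment dumped above can be read back into a manifest. Every field below (name, namespace, labels, replica count, strategy, container name, and image) comes straight from the object dump in this log:

    kubectl apply -n deployment-6462 -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-recreate-deployment
    spec:
      replicas: 1
      strategy:
        type: Recreate        # old pods are torn down before new ones start
      selector:
        matchLabels:
          name: sample-pod-3
      template:
        metadata:
          labels:
            name: sample-pod-3
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
    EOF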
Mar 22 14:29:41.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:29:41.362: INFO: namespace deployment-6462 deletion completed in 6.130829909s • [SLOW TEST:10.534 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:29:41.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 22 14:29:41.423: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 22 14:29:50.493: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:29:50.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9495" for this suite. 
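The submit/observe/delete cycle above can be replayed by hand with a watch running in the background; the pod name, image, and grace period here are illustrative assumptions, only the namespace comes from this run:

    # stream pod lifecycle events for the namespace in the background
    kubectl get pods -n pods-9495 --watch &

    # submit a pod, then delete it gracefully and watch the termination notice propagate
    kubectl run pod-submit-demo --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine -n pods-9495
    kubectl delete pod pod-submit-demo -n pods-9495 --grace-period=30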
Mar 22 14:29:56.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:29:56.612: INFO: namespace pods-9495 deletion completed in 6.111415828s • [SLOW TEST:15.250 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:29:56.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 22 14:29:56.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3325' Mar 22 14:29:59.126: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 22 14:29:59.126: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 22 14:29:59.132: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 22 14:29:59.159: INFO: scanned /root for discovery docs: Mar 22 14:29:59.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3325' Mar 22 14:30:15.643: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 22 14:30:15.643: INFO: stdout: "Created e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b\nScaling up e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 22 14:30:15.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3325' Mar 22 14:30:15.726: INFO: stderr: "" Mar 22 14:30:15.726: INFO: stdout: "e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b-vsbl8 " Mar 22 14:30:15.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b-vsbl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3325' Mar 22 14:30:15.815: INFO: stderr: "" Mar 22 14:30:15.815: INFO: stdout: "true" Mar 22 14:30:15.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b-vsbl8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3325' Mar 22 14:30:15.893: INFO: stderr: "" Mar 22 14:30:15.893: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 22 14:30:15.893: INFO: e2e-test-nginx-rc-d4aaeca41c30569ac6e683076b33261b-vsbl8 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 22 14:30:15.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3325' Mar 22 14:30:16.002: INFO: stderr: "" Mar 22 14:30:16.002: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:30:16.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3325" for this suite.
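Reassembled from the log, the whole same-image scenario is two commands; both are exactly what the suite ran, and both emit the deprecation warnings captured above. On newer clusters the closest analogue of a same-image bounce is kubectl rollout restart.

    # create the replication controller, then roll it to the identical image
    kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc \
      --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3325
    kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc \
      --update-period=1s --image=docker.io/library/nginx:1.14-alpine \
      --image-pull-policy=IfNotPresent --namespace=kubectl-3325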
Mar 22 14:30:38.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:30:38.155: INFO: namespace kubectl-3325 deletion completed in 22.127274785s • [SLOW TEST:41.542 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:30:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-338 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-338 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-338 Mar 22 14:30:38.232: INFO: Found 0 stateful pods, waiting for 1 Mar 22 14:30:48.237: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 22 14:30:48.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:30:48.461: INFO: stderr: "I0322 14:30:48.365473 2958 log.go:172] (0xc0001166e0) (0xc0009486e0) Create stream\nI0322 14:30:48.365537 2958 log.go:172] (0xc0001166e0) (0xc0009486e0) Stream added, broadcasting: 1\nI0322 14:30:48.368154 2958 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0322 14:30:48.368214 2958 log.go:172] (0xc0001166e0) (0xc0006c0280) Create stream\nI0322 14:30:48.368238 2958 log.go:172] (0xc0001166e0) (0xc0006c0280) Stream added, broadcasting: 3\nI0322 14:30:48.369346 2958 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0322 14:30:48.369399 2958 log.go:172] (0xc0001166e0) (0xc00094a000) Create stream\nI0322 14:30:48.369418 2958 log.go:172] (0xc0001166e0) (0xc00094a000) Stream added, broadcasting: 5\nI0322 14:30:48.370489 2958 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0322 14:30:48.423618 2958 log.go:172] (0xc0001166e0) Data frame received for 5\nI0322 14:30:48.423645 2958 log.go:172] (0xc00094a000) (5) Data frame handling\nI0322 14:30:48.423661 2958 log.go:172] (0xc00094a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0322 14:30:48.454044 2958 log.go:172] (0xc0001166e0) Data frame received for 3\nI0322 14:30:48.454091 2958 log.go:172] (0xc0006c0280) (3) Data frame handling\nI0322 14:30:48.454125 2958 log.go:172] (0xc0006c0280) (3) Data frame sent\nI0322 14:30:48.454277 2958 log.go:172] (0xc0001166e0) Data frame received for 3\nI0322 14:30:48.454309 2958 log.go:172] (0xc0006c0280) (3) Data frame handling\nI0322 14:30:48.454336 2958 log.go:172] (0xc0001166e0) Data frame received for 5\nI0322 14:30:48.454361 2958 log.go:172] (0xc00094a000) (5) Data frame handling\nI0322 14:30:48.456527 2958 log.go:172] (0xc0001166e0) Data frame received for 1\nI0322 14:30:48.456568 2958 log.go:172] (0xc0009486e0) (1) Data frame handling\nI0322 14:30:48.456597 2958 log.go:172] (0xc0009486e0) (1) Data frame sent\nI0322 14:30:48.456628 2958 log.go:172] (0xc0001166e0) (0xc0009486e0) Stream removed, broadcasting: 1\nI0322 14:30:48.456658 2958 log.go:172] (0xc0001166e0) Go away received\nI0322 14:30:48.457038 2958 log.go:172] (0xc0001166e0) (0xc0009486e0) Stream removed, broadcasting: 1\nI0322 14:30:48.457063 2958 log.go:172] (0xc0001166e0) (0xc0006c0280) Stream removed, broadcasting: 3\nI0322 14:30:48.457079 2958 log.go:172] (0xc0001166e0) (0xc00094a000) Stream removed, broadcasting: 5\n" Mar 22 14:30:48.461: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:30:48.461: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:30:48.465: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 22 14:30:58.470: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:30:58.470: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:30:58.484: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:30:58.484: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:30:58.484: INFO: Mar 22 14:30:58.484: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 22 14:30:59.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9963868s Mar 22 14:31:00.493: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991385318s Mar 22 14:31:01.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987745024s Mar 22 14:31:02.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982287101s Mar 22 14:31:03.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976864034s Mar 22 14:31:04.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972073164s Mar 22 14:31:05.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966673663s Mar 22 14:31:06.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961603618s Mar 22 14:31:07.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.794639ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-338 Mar 22 14:31:08.534: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:31:08.774: INFO: stderr: "I0322 14:31:08.683086 2980 log.go:172] (0xc000a0c6e0) (0xc000498aa0) Create stream\nI0322 14:31:08.683165 2980 log.go:172] (0xc000a0c6e0) (0xc000498aa0) Stream added, broadcasting: 1\nI0322 14:31:08.687579 2980 log.go:172] (0xc000a0c6e0) Reply frame received for 1\nI0322 14:31:08.687631 2980 log.go:172] (0xc000a0c6e0) (0xc0004981e0) Create stream\nI0322 14:31:08.687645 2980 log.go:172] (0xc000a0c6e0) (0xc0004981e0) Stream added, broadcasting: 3\nI0322 14:31:08.688780 2980 log.go:172] (0xc000a0c6e0) Reply frame received for 3\nI0322 14:31:08.688832 2980 log.go:172] (0xc000a0c6e0) (0xc00002e000) Create stream\nI0322 14:31:08.688846 2980 log.go:172] (0xc000a0c6e0) (0xc00002e000) Stream added, broadcasting: 5\nI0322 14:31:08.690133 2980 log.go:172] (0xc000a0c6e0) Reply frame received for 5\nI0322 14:31:08.767586 2980 log.go:172] (0xc000a0c6e0) Data frame received for 5\nI0322 14:31:08.767613 2980 log.go:172] (0xc00002e000) (5) Data frame handling\nI0322 14:31:08.767625 2980 log.go:172] (0xc00002e000) (5) Data frame sent\nI0322 14:31:08.767634 2980 log.go:172] (0xc000a0c6e0) Data frame received for 5\nI0322 14:31:08.767640 2980 log.go:172] (0xc00002e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0322 14:31:08.767670 2980 log.go:172] (0xc000a0c6e0) Data frame received for 3\nI0322 14:31:08.767722 2980 log.go:172] (0xc0004981e0) (3) Data frame handling\nI0322 14:31:08.767743 2980 log.go:172] (0xc0004981e0) (3) Data frame sent\nI0322 14:31:08.767757 2980 log.go:172] (0xc000a0c6e0) Data frame received for 3\nI0322 14:31:08.767769 2980 log.go:172] (0xc0004981e0) (3) Data frame handling\nI0322 14:31:08.769075 2980 log.go:172] (0xc000a0c6e0) Data frame received for 1\nI0322 14:31:08.769090 2980 log.go:172] (0xc000498aa0) (1) Data frame handling\nI0322 14:31:08.769198 2980 log.go:172] (0xc000498aa0) (1) Data frame sent\nI0322 14:31:08.769278 2980 log.go:172] (0xc000a0c6e0) (0xc000498aa0) Stream removed, broadcasting: 1\nI0322 14:31:08.769295 2980 log.go:172] (0xc000a0c6e0) Go away received\nI0322 14:31:08.769742 2980 log.go:172] (0xc000a0c6e0) (0xc000498aa0) Stream removed, broadcasting: 1\nI0322 14:31:08.769782 2980 log.go:172] (0xc000a0c6e0) (0xc0004981e0) Stream removed, broadcasting: 3\nI0322 14:31:08.769808 2980 log.go:172] (0xc000a0c6e0) (0xc00002e000) Stream removed, broadcasting: 5\n" Mar 22 14:31:08.774: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:31:08.774: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:31:08.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:31:08.973: INFO: stderr: "I0322 14:31:08.902096 3000 log.go:172] (0xc00038a0b0) (0xc000286640) Create stream\nI0322 14:31:08.902158 3000 log.go:172] (0xc00038a0b0) (0xc000286640) Stream added, broadcasting: 1\nI0322 14:31:08.905022 3000 log.go:172] (0xc00038a0b0) Reply frame received for 1\nI0322 14:31:08.905066 3000 log.go:172] (0xc00038a0b0) (0xc0005703c0) Create stream\nI0322 14:31:08.905079 3000 log.go:172] (0xc00038a0b0) (0xc0005703c0) Stream added, broadcasting: 3\nI0322 14:31:08.906076 3000 log.go:172] (0xc00038a0b0) Reply frame received for 3\nI0322 
14:31:08.906108 3000 log.go:172] (0xc00038a0b0) (0xc000840000) Create stream\nI0322 14:31:08.906123 3000 log.go:172] (0xc00038a0b0) (0xc000840000) Stream added, broadcasting: 5\nI0322 14:31:08.907166 3000 log.go:172] (0xc00038a0b0) Reply frame received for 5\nI0322 14:31:08.966442 3000 log.go:172] (0xc00038a0b0) Data frame received for 3\nI0322 14:31:08.966469 3000 log.go:172] (0xc0005703c0) (3) Data frame handling\nI0322 14:31:08.966481 3000 log.go:172] (0xc0005703c0) (3) Data frame sent\nI0322 14:31:08.966512 3000 log.go:172] (0xc00038a0b0) Data frame received for 5\nI0322 14:31:08.966579 3000 log.go:172] (0xc000840000) (5) Data frame handling\nI0322 14:31:08.966626 3000 log.go:172] (0xc000840000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0322 14:31:08.966658 3000 log.go:172] (0xc00038a0b0) Data frame received for 5\nI0322 14:31:08.966674 3000 log.go:172] (0xc000840000) (5) Data frame handling\nI0322 14:31:08.966699 3000 log.go:172] (0xc00038a0b0) Data frame received for 3\nI0322 14:31:08.966713 3000 log.go:172] (0xc0005703c0) (3) Data frame handling\nI0322 14:31:08.968967 3000 log.go:172] (0xc00038a0b0) Data frame received for 1\nI0322 14:31:08.968987 3000 log.go:172] (0xc000286640) (1) Data frame handling\nI0322 14:31:08.968996 3000 log.go:172] (0xc000286640) (1) Data frame sent\nI0322 14:31:08.969012 3000 log.go:172] (0xc00038a0b0) (0xc000286640) Stream removed, broadcasting: 1\nI0322 14:31:08.969027 3000 log.go:172] (0xc00038a0b0) Go away received\nI0322 14:31:08.969482 3000 log.go:172] (0xc00038a0b0) (0xc000286640) Stream removed, broadcasting: 1\nI0322 14:31:08.969493 3000 log.go:172] (0xc00038a0b0) (0xc0005703c0) Stream removed, broadcasting: 3\nI0322 14:31:08.969498 3000 log.go:172] (0xc00038a0b0) (0xc000840000) Stream removed, broadcasting: 5\n" Mar 22 14:31:08.973: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:31:08.973: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:31:08.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:31:09.177: INFO: stderr: "I0322 14:31:09.107432 3022 log.go:172] (0xc00013adc0) (0xc000656820) Create stream\nI0322 14:31:09.107503 3022 log.go:172] (0xc00013adc0) (0xc000656820) Stream added, broadcasting: 1\nI0322 14:31:09.111133 3022 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0322 14:31:09.111189 3022 log.go:172] (0xc00013adc0) (0xc0006ae280) Create stream\nI0322 14:31:09.111209 3022 log.go:172] (0xc00013adc0) (0xc0006ae280) Stream added, broadcasting: 3\nI0322 14:31:09.112183 3022 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0322 14:31:09.112234 3022 log.go:172] (0xc00013adc0) (0xc000656000) Create stream\nI0322 14:31:09.112260 3022 log.go:172] (0xc00013adc0) (0xc000656000) Stream added, broadcasting: 5\nI0322 14:31:09.113068 3022 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0322 14:31:09.170577 3022 log.go:172] (0xc00013adc0) Data frame received for 5\nI0322 14:31:09.170617 3022 log.go:172] (0xc000656000) (5) Data frame handling\nI0322 14:31:09.170632 3022 log.go:172] (0xc000656000) (5) Data frame sent\nI0322 14:31:09.170644 3022 log.go:172] (0xc00013adc0) Data frame received for 5\nI0322 14:31:09.170653 3022 log.go:172] (0xc000656000) (5) Data frame 
handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0322 14:31:09.170679 3022 log.go:172] (0xc00013adc0) Data frame received for 3\nI0322 14:31:09.170688 3022 log.go:172] (0xc0006ae280) (3) Data frame handling\nI0322 14:31:09.170705 3022 log.go:172] (0xc0006ae280) (3) Data frame sent\nI0322 14:31:09.170716 3022 log.go:172] (0xc00013adc0) Data frame received for 3\nI0322 14:31:09.170732 3022 log.go:172] (0xc0006ae280) (3) Data frame handling\nI0322 14:31:09.172318 3022 log.go:172] (0xc00013adc0) Data frame received for 1\nI0322 14:31:09.172360 3022 log.go:172] (0xc000656820) (1) Data frame handling\nI0322 14:31:09.172383 3022 log.go:172] (0xc000656820) (1) Data frame sent\nI0322 14:31:09.172418 3022 log.go:172] (0xc00013adc0) (0xc000656820) Stream removed, broadcasting: 1\nI0322 14:31:09.172449 3022 log.go:172] (0xc00013adc0) Go away received\nI0322 14:31:09.172816 3022 log.go:172] (0xc00013adc0) (0xc000656820) Stream removed, broadcasting: 1\nI0322 14:31:09.172837 3022 log.go:172] (0xc00013adc0) (0xc0006ae280) Stream removed, broadcasting: 3\nI0322 14:31:09.172849 3022 log.go:172] (0xc00013adc0) (0xc000656000) Stream removed, broadcasting: 5\n" Mar 22 14:31:09.177: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 22 14:31:09.177: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 22 14:31:09.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 22 14:31:19.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:31:19.187: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 14:31:19.187: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 22 14:31:19.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:31:19.503: INFO: stderr: "I0322 14:31:19.408462 3043 log.go:172] (0xc000116dc0) (0xc0007d2a00) Create stream\nI0322 14:31:19.408487 3043 log.go:172] (0xc000116dc0) (0xc0007d2a00) Stream added, broadcasting: 1\nI0322 14:31:19.424266 3043 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0322 14:31:19.424306 3043 log.go:172] (0xc000116dc0) (0xc0007d2280) Create stream\nI0322 14:31:19.424315 3043 log.go:172] (0xc000116dc0) (0xc0007d2280) Stream added, broadcasting: 3\nI0322 14:31:19.425538 3043 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0322 14:31:19.425577 3043 log.go:172] (0xc000116dc0) (0xc0002cfae0) Create stream\nI0322 14:31:19.425588 3043 log.go:172] (0xc000116dc0) (0xc0002cfae0) Stream added, broadcasting: 5\nI0322 14:31:19.426353 3043 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0322 14:31:19.496666 3043 log.go:172] (0xc000116dc0) Data frame received for 5\nI0322 14:31:19.496772 3043 log.go:172] (0xc0002cfae0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:31:19.496823 3043 log.go:172] (0xc000116dc0) Data frame received for 3\nI0322 14:31:19.496868 3043 log.go:172] (0xc0007d2280) (3) Data frame handling\nI0322 14:31:19.496890 3043 log.go:172] (0xc0007d2280) (3) Data frame sent\nI0322 14:31:19.496898 3043 log.go:172] (0xc000116dc0) Data frame received for 3\nI0322 
14:31:19.496911 3043 log.go:172] (0xc0007d2280) (3) Data frame handling\nI0322 14:31:19.496961 3043 log.go:172] (0xc0002cfae0) (5) Data frame sent\nI0322 14:31:19.496978 3043 log.go:172] (0xc000116dc0) Data frame received for 5\nI0322 14:31:19.496983 3043 log.go:172] (0xc0002cfae0) (5) Data frame handling\nI0322 14:31:19.498443 3043 log.go:172] (0xc000116dc0) Data frame received for 1\nI0322 14:31:19.498474 3043 log.go:172] (0xc0007d2a00) (1) Data frame handling\nI0322 14:31:19.498492 3043 log.go:172] (0xc0007d2a00) (1) Data frame sent\nI0322 14:31:19.498510 3043 log.go:172] (0xc000116dc0) (0xc0007d2a00) Stream removed, broadcasting: 1\nI0322 14:31:19.498971 3043 log.go:172] (0xc000116dc0) (0xc0007d2a00) Stream removed, broadcasting: 1\nI0322 14:31:19.498994 3043 log.go:172] (0xc000116dc0) (0xc0007d2280) Stream removed, broadcasting: 3\nI0322 14:31:19.499004 3043 log.go:172] (0xc000116dc0) (0xc0002cfae0) Stream removed, broadcasting: 5\n" Mar 22 14:31:19.504: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:31:19.504: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:31:19.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:31:19.796: INFO: stderr: "I0322 14:31:19.690488 3066 log.go:172] (0xc00092e370) (0xc00040a820) Create stream\nI0322 14:31:19.690556 3066 log.go:172] (0xc00092e370) (0xc00040a820) Stream added, broadcasting: 1\nI0322 14:31:19.696051 3066 log.go:172] (0xc00092e370) Reply frame received for 1\nI0322 14:31:19.696110 3066 log.go:172] (0xc00092e370) (0xc000424460) Create stream\nI0322 14:31:19.696122 3066 log.go:172] (0xc00092e370) (0xc000424460) Stream added, broadcasting: 3\nI0322 14:31:19.697747 3066 log.go:172] (0xc00092e370) Reply frame received for 3\nI0322 14:31:19.697791 3066 log.go:172] (0xc00092e370) (0xc00040a8c0) Create stream\nI0322 14:31:19.697804 3066 log.go:172] (0xc00092e370) (0xc00040a8c0) Stream added, broadcasting: 5\nI0322 14:31:19.698905 3066 log.go:172] (0xc00092e370) Reply frame received for 5\nI0322 14:31:19.756656 3066 log.go:172] (0xc00092e370) Data frame received for 5\nI0322 14:31:19.756689 3066 log.go:172] (0xc00040a8c0) (5) Data frame handling\nI0322 14:31:19.756703 3066 log.go:172] (0xc00040a8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:31:19.789680 3066 log.go:172] (0xc00092e370) Data frame received for 3\nI0322 14:31:19.789727 3066 log.go:172] (0xc000424460) (3) Data frame handling\nI0322 14:31:19.789759 3066 log.go:172] (0xc000424460) (3) Data frame sent\nI0322 14:31:19.789779 3066 log.go:172] (0xc00092e370) Data frame received for 3\nI0322 14:31:19.789796 3066 log.go:172] (0xc000424460) (3) Data frame handling\nI0322 14:31:19.789963 3066 log.go:172] (0xc00092e370) Data frame received for 5\nI0322 14:31:19.789997 3066 log.go:172] (0xc00040a8c0) (5) Data frame handling\nI0322 14:31:19.791683 3066 log.go:172] (0xc00092e370) Data frame received for 1\nI0322 14:31:19.791714 3066 log.go:172] (0xc00040a820) (1) Data frame handling\nI0322 14:31:19.791735 3066 log.go:172] (0xc00040a820) (1) Data frame sent\nI0322 14:31:19.791763 3066 log.go:172] (0xc00092e370) (0xc00040a820) Stream removed, broadcasting: 1\nI0322 14:31:19.791789 3066 log.go:172] (0xc00092e370) Go away received\nI0322 14:31:19.792356 3066 log.go:172] (0xc00092e370) (0xc00040a820) 
Stream removed, broadcasting: 1\nI0322 14:31:19.792385 3066 log.go:172] (0xc00092e370) (0xc000424460) Stream removed, broadcasting: 3\nI0322 14:31:19.792399 3066 log.go:172] (0xc00092e370) (0xc00040a8c0) Stream removed, broadcasting: 5\n" Mar 22 14:31:19.796: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:31:19.796: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:31:19.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 22 14:31:20.041: INFO: stderr: "I0322 14:31:19.921017 3087 log.go:172] (0xc000a0a160) (0xc00056c500) Create stream\nI0322 14:31:19.921082 3087 log.go:172] (0xc000a0a160) (0xc00056c500) Stream added, broadcasting: 1\nI0322 14:31:19.923425 3087 log.go:172] (0xc000a0a160) Reply frame received for 1\nI0322 14:31:19.923462 3087 log.go:172] (0xc000a0a160) (0xc00081c000) Create stream\nI0322 14:31:19.923470 3087 log.go:172] (0xc000a0a160) (0xc00081c000) Stream added, broadcasting: 3\nI0322 14:31:19.924399 3087 log.go:172] (0xc000a0a160) Reply frame received for 3\nI0322 14:31:19.924450 3087 log.go:172] (0xc000a0a160) (0xc0002e6000) Create stream\nI0322 14:31:19.924466 3087 log.go:172] (0xc000a0a160) (0xc0002e6000) Stream added, broadcasting: 5\nI0322 14:31:19.925494 3087 log.go:172] (0xc000a0a160) Reply frame received for 5\nI0322 14:31:19.995266 3087 log.go:172] (0xc000a0a160) Data frame received for 5\nI0322 14:31:19.995296 3087 log.go:172] (0xc0002e6000) (5) Data frame handling\nI0322 14:31:19.995328 3087 log.go:172] (0xc0002e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0322 14:31:20.035621 3087 log.go:172] (0xc000a0a160) Data frame received for 3\nI0322 14:31:20.035641 3087 log.go:172] (0xc00081c000) (3) Data frame handling\nI0322 14:31:20.035718 3087 log.go:172] (0xc00081c000) (3) Data frame sent\nI0322 14:31:20.035897 3087 log.go:172] (0xc000a0a160) Data frame received for 3\nI0322 14:31:20.035977 3087 log.go:172] (0xc00081c000) (3) Data frame handling\nI0322 14:31:20.036024 3087 log.go:172] (0xc000a0a160) Data frame received for 5\nI0322 14:31:20.036037 3087 log.go:172] (0xc0002e6000) (5) Data frame handling\nI0322 14:31:20.037950 3087 log.go:172] (0xc000a0a160) Data frame received for 1\nI0322 14:31:20.037976 3087 log.go:172] (0xc00056c500) (1) Data frame handling\nI0322 14:31:20.037986 3087 log.go:172] (0xc00056c500) (1) Data frame sent\nI0322 14:31:20.037998 3087 log.go:172] (0xc000a0a160) (0xc00056c500) Stream removed, broadcasting: 1\nI0322 14:31:20.038029 3087 log.go:172] (0xc000a0a160) Go away received\nI0322 14:31:20.038292 3087 log.go:172] (0xc000a0a160) (0xc00056c500) Stream removed, broadcasting: 1\nI0322 14:31:20.038306 3087 log.go:172] (0xc000a0a160) (0xc00081c000) Stream removed, broadcasting: 3\nI0322 14:31:20.038315 3087 log.go:172] (0xc000a0a160) (0xc0002e6000) Stream removed, broadcasting: 5\n" Mar 22 14:31:20.042: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 22 14:31:20.042: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 22 14:31:20.042: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:31:20.046: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 22 14:31:30.055: INFO: Waiting for pod ss-0 
to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:31:30.055: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:31:30.055: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 22 14:31:30.065: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:30.065: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:30.065: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:30.065: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:30.065: INFO: Mar 22 14:31:30.065: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:31.070: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:31.070: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:31.070: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:31.070: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:31.070: INFO: Mar 22 14:31:31.070: INFO: StatefulSet ss has not 
reached scale 0, at 3 Mar 22 14:31:32.075: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:32.075: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:32.075: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:32.075: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:32.075: INFO: Mar 22 14:31:32.075: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:33.080: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:33.081: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:33.081: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:33.081: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:33.081: INFO: Mar 22 14:31:33.081: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:34.086: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:34.086: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:34.086: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:34.086: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:34.086: INFO: Mar 22 14:31:34.086: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:35.091: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:35.091: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:35.091: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:35.091: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:35.091: INFO: Mar 22 14:31:35.091: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:36.095: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:36.095: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:36.095: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:36.095: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:36.095: INFO: Mar 22 14:31:36.095: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:37.100: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:37.100: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:37.100: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:37.100: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:37.100: INFO: Mar 22 14:31:37.100: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:38.105: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:38.105: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:38.105: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 
14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:38.105: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:38.105: INFO: Mar 22 14:31:38.105: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 22 14:31:39.110: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 14:31:39.110: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:38 +0000 UTC }] Mar 22 14:31:39.110: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:39.110: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:31:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-22 14:30:58 +0000 UTC }] Mar 22 14:31:39.110: INFO: Mar 22 14:31:39.110: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-338 Mar 22 14:31:40.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:31:40.245: INFO: rc: 1 Mar 22 14:31:40.245: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002c3a840 exit status 1 true [0xc002798008 0xc002798020 0xc002798048] [0xc002798008 0xc002798020 0xc002798048] [0xc002798018 0xc002798030] [0xba70e0 0xba70e0] 0xc002581e60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 22 14:31:50.245: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:31:50.345: INFO: rc: 1 Mar 22 14:31:50.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10090 exit status 1 true [0xc002c04000 0xc002c04018 0xc002c04030] [0xc002c04000 0xc002c04018 0xc002c04030] [0xc002c04010 0xc002c04028] [0xba70e0 0xba70e0] 0xc002248000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:00.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:00.438: INFO: rc: 1 Mar 22 14:32:00.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10150 exit status 1 true [0xc002c04038 0xc002c04050 0xc002c04068] [0xc002c04038 0xc002c04050 0xc002c04068] [0xc002c04048 0xc002c04060] [0xba70e0 0xba70e0] 0xc0022483c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:10.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:10.533: INFO: rc: 1 Mar 22 14:32:10.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d060c0 exit status 1 true [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4010 0xc0023d4028] [0xba70e0 0xba70e0] 0xc00280e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:20.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:20.627: INFO: rc: 1 Mar 22 14:32:20.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d061b0 exit status 1 true [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4048 0xc0023d4060] [0xba70e0 0xba70e0] 0xc00280e5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:30.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:30.727: INFO: rc: 1 Mar 22 14:32:30.727: INFO: Waiting 10s to retry failed RunHostCmd: 
error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a960 exit status 1 true [0xc002798068 0xc002798098 0xc0027980e0] [0xc002798068 0xc002798098 0xc0027980e0] [0xc002798090 0xc0027980c0] [0xba70e0 0xba70e0] 0xc003282780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:40.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:40.820: INFO: rc: 1 Mar 22 14:32:40.820: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d06270 exit status 1 true [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4080 0xc0023d4098] [0xba70e0 0xba70e0] 0xc00280e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:32:50.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:32:50.920: INFO: rc: 1 Mar 22 14:32:50.920: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3aa50 exit status 1 true [0xc0027980f8 0xc002798128 0xc002798148] [0xc0027980f8 0xc002798128 0xc002798148] [0xc002798120 0xc002798138] [0xba70e0 0xba70e0] 0xc003282c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:00.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:01.020: INFO: rc: 1 Mar 22 14:33:01.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d06360 exit status 1 true [0xc0023d40a8 0xc0023d40c0 0xc0023d40d8] [0xc0023d40a8 0xc0023d40c0 0xc0023d40d8] [0xc0023d40b8 0xc0023d40d0] [0xba70e0 0xba70e0] 0xc00280ee40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:11.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:11.121: INFO: rc: 1 Mar 22 14:33:11.121: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d06420 exit 
status 1 true [0xc0023d40e0 0xc0023d40f8 0xc0023d4110] [0xc0023d40e0 0xc0023d40f8 0xc0023d4110] [0xc0023d40f0 0xc0023d4108] [0xba70e0 0xba70e0] 0xc00280f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:21.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:21.228: INFO: rc: 1 Mar 22 14:33:21.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d064e0 exit status 1 true [0xc0023d4118 0xc0023d4130 0xc0023d4148] [0xc0023d4118 0xc0023d4130 0xc0023d4148] [0xc0023d4128 0xc0023d4140] [0xba70e0 0xba70e0] 0xc00280f5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:31.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:31.333: INFO: rc: 1 Mar 22 14:33:31.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00298a120 exit status 1 true [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12010 0xc002e12028] [0xba70e0 0xba70e0] 0xc0027a0660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:41.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:41.428: INFO: rc: 1 Mar 22 14:33:41.428: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00298a240 exit status 1 true [0xc002e12040 0xc002e12058 0xc002e12070] [0xc002e12040 0xc002e12058 0xc002e12070] [0xc002e12050 0xc002e12068] [0xba70e0 0xba70e0] 0xc0027a0d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:33:51.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:33:51.520: INFO: rc: 1 Mar 22 14:33:51.520: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00298a270 exit status 1 true [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12010 0xc002e12028] [0xba70e0 0xba70e0] 0xc002581aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 
14:34:01.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:01.627: INFO: rc: 1 Mar 22 14:34:01.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a0c0 exit status 1 true [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4010 0xc0023d4028] [0xba70e0 0xba70e0] 0xc0027a0060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:34:11.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:11.720: INFO: rc: 1 Mar 22 14:34:11.720: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d100f0 exit status 1 true [0xc002798008 0xc002798020 0xc002798048] [0xc002798008 0xc002798020 0xc002798048] [0xc002798018 0xc002798030] [0xba70e0 0xba70e0] 0xc00280e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:34:21.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:21.821: INFO: rc: 1 Mar 22 14:34:21.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10210 exit status 1 true [0xc002798068 0xc002798098 0xc0027980e0] [0xc002798068 0xc002798098 0xc0027980e0] [0xc002798090 0xc0027980c0] [0xba70e0 0xba70e0] 0xc00280e5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:34:31.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:31.908: INFO: rc: 1 Mar 22 14:34:31.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d102d0 exit status 1 true [0xc0027980f8 0xc002798128 0xc002798148] [0xc0027980f8 0xc002798128 0xc002798148] [0xc002798120 0xc002798138] [0xba70e0 0xba70e0] 0xc00280e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:34:41.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:42.013: INFO: rc: 1 Mar 22 14:34:42.013: INFO: Waiting 10s 
to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a210 exit status 1 true [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4048 0xc0023d4060] [0xba70e0 0xba70e0] 0xc0027a0de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:34:52.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:34:52.113: INFO: rc: 1 Mar 22 14:34:52.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10390 exit status 1 true [0xc002798168 0xc0027981a8 0xc0027981e0] [0xc002798168 0xc0027981a8 0xc0027981e0] [0xc002798190 0xc0027981d8] [0xba70e0 0xba70e0] 0xc00280ee40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:02.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:02.256: INFO: rc: 1 Mar 22 14:35:02.256: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00298a330 exit status 1 true [0xc002e12078 0xc002e12090 0xc002e120a8] [0xc002e12078 0xc002e12090 0xc002e120a8] [0xc002e12088 0xc002e120a0] [0xba70e0 0xba70e0] 0xc003282720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:12.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:12.355: INFO: rc: 1 Mar 22 14:35:12.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10450 exit status 1 true [0xc0027981e8 0xc002798220 0xc002798260] [0xc0027981e8 0xc002798220 0xc002798260] [0xc002798200 0xc002798248] [0xba70e0 0xba70e0] 0xc00280f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:22.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:22.454: INFO: rc: 1 Mar 22 14:35:22.454: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not 
found [] 0xc002c3a870 exit status 1 true [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4080 0xc0023d4098] [0xba70e0 0xba70e0] 0xc0027a1560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:32.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:32.551: INFO: rc: 1 Mar 22 14:35:32.551: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10540 exit status 1 true [0xc002798270 0xc0027982c8 0xc002798308] [0xc002798270 0xc0027982c8 0xc002798308] [0xc0027982a8 0xc0027982f0] [0xba70e0 0xba70e0] 0xc00280f5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:42.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:42.656: INFO: rc: 1 Mar 22 14:35:42.656: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10660 exit status 1 true [0xc002798318 0xc002798360 0xc0027983a0] [0xc002798318 0xc002798360 0xc0027983a0] [0xc002798350 0xc002798398] [0xba70e0 0xba70e0] 0xc00280fb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:35:52.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:35:52.751: INFO: rc: 1 Mar 22 14:35:52.751: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d10090 exit status 1 true [0xc002798008 0xc002798020 0xc002798048] [0xc002798008 0xc002798020 0xc002798048] [0xc002798018 0xc002798030] [0xba70e0 0xba70e0] 0xc002580060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:36:02.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:36:02.848: INFO: rc: 1 Mar 22 14:36:02.848: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a0f0 exit status 1 true [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4000 0xc0023d4018 0xc0023d4030] [0xc0023d4010 0xc0023d4028] [0xba70e0 0xba70e0] 0xc00280e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Mar 22 14:36:12.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:36:12.942: INFO: rc: 1 Mar 22 14:36:12.942: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a1b0 exit status 1 true [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4038 0xc0023d4050 0xc0023d4068] [0xc0023d4048 0xc0023d4060] [0xba70e0 0xba70e0] 0xc00280e5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:36:22.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:36:23.044: INFO: rc: 1 Mar 22 14:36:23.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002c3a810 exit status 1 true [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4070 0xc0023d4088 0xc0023d40a0] [0xc0023d4080 0xc0023d4098] [0xba70e0 0xba70e0] 0xc00280e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:36:33.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:36:33.140: INFO: rc: 1 Mar 22 14:36:33.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d060f0 exit status 1 true [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12000 0xc002e12018 0xc002e12030] [0xc002e12010 0xc002e12028] [0xba70e0 0xba70e0] 0xc0027a0660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 14:36:43.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 22 14:36:43.234: INFO: rc: 1 Mar 22 14:36:43.234: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Mar 22 14:36:43.234: INFO: Scaling statefulset ss to 0 Mar 22 14:36:43.243: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 22 14:36:43.245: INFO: Deleting all statefulset in ns statefulset-338 Mar 22 14:36:43.248: INFO: Scaling statefulset ss to 0 Mar 22 14:36:43.255: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 14:36:43.256: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:36:43.271: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-338" for this suite. Mar 22 14:36:49.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:36:49.381: INFO: namespace statefulset-338 deletion completed in 6.089872676s • [SLOW TEST:371.226 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:36:49.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 22 14:36:49.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 22 14:36:49.583: INFO: stderr: "" Mar 22 14:36:49.583: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:36:49.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-209" for this suite. 
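
A note on the burst-scaling StatefulSet run that finishes above: the test drives pod readiness by moving index.html out of and back into /usr/share/nginx/html, and the Ready=false/Ready=true transitions in the log track each mv; the trailing || true keeps the exec step from failing when the file is already where the command wants it (the earlier "mv: can't rename '/tmp/index.html'" on ss-2 is exactly that case). A minimal sketch of the same sequence by hand, assuming the ss StatefulSet and statefulset-338 namespace from this run:

# Make one pod unready by hiding the file its readiness check looks for:
kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-338 ss-0 -- \
  /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Scale to zero; per the behavior under test, deletion proceeds even though
# the pods are unready:
kubectl --kubeconfig=/root/.kube/config scale statefulset ss \
  --namespace=statefulset-338 --replicas=0
# Poll until status.replicas reports 0:
kubectl --kubeconfig=/root/.kube/config get statefulset ss \
  --namespace=statefulset-338 -o jsonpath='{.status.replicas}'

The long run of "Waiting 10s to retry failed RunHostCmd" entries records the framework retrying the restore mv against ss-0 while the scale-down deletes it: first "container not found" as the pod terminates, then NotFound once it is gone, every 10s until the helper stops retrying at 14:36:43.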
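
The Kubectl cluster-info check above only asserts that the master and KubeDNS endpoints appear in the command's output; the \x1b[0;32m-style sequences captured in stdout are ANSI color codes from kubectl. To reproduce the check by hand with the same kubeconfig:

kubectl --kubeconfig=/root/.kube/config cluster-info
# Expected to print (colorized on a terminal), matching the captured stdout:
#   Kubernetes master is running at https://172.30.12.66:32769
#   KubeDNS is running at https://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy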
Mar 22 14:36:55.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:36:55.730: INFO: namespace kubectl-209 deletion completed in 6.14224953s • [SLOW TEST:6.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:36:55.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:37:00.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7213" for this suite. 
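
The ReplicationController adoption test above creates the orphan pod first, then a controller whose selector matches the pod's 'name' label; the controller adopts the existing pod by setting itself as an ownerReference rather than creating an extra replica. A sketch of the same flow with illustrative manifests (pod-adoption mirrors the STEP text; the image, label value, and replica count are assumptions, and the namespace name is reused from this run for illustration):

cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=replication-controller-7213
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: nginx
EOF
cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=replication-controller-7213
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx
EOF
# Adoption shows up as an ownerReference on the pre-existing pod:
kubectl --kubeconfig=/root/.kube/config get pod pod-adoption \
  --namespace=replication-controller-7213 -o jsonpath='{.metadata.ownerReferences[0].name}'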
Mar 22 14:37:22.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 22 14:37:22.969: INFO: namespace replication-controller-7213 deletion completed in 22.103736169s • [SLOW TEST:27.240 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 22 14:37:22.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 22 14:37:23.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7295' Mar 22 14:37:23.300: INFO: stderr: "" Mar 22 14:37:23.300: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 22 14:37:24.304: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:37:24.304: INFO: Found 0 / 1 Mar 22 14:37:25.305: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:37:25.305: INFO: Found 0 / 1 Mar 22 14:37:26.305: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:37:26.305: INFO: Found 1 / 1 Mar 22 14:37:26.305: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 22 14:37:26.308: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:37:26.308: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 22 14:37:26.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nw6d6 --namespace=kubectl-7295 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 22 14:37:26.412: INFO: stderr: "" Mar 22 14:37:26.412: INFO: stdout: "pod/redis-master-nw6d6 patched\n" STEP: checking annotations Mar 22 14:37:26.415: INFO: Selector matched 1 pods for map[app:redis] Mar 22 14:37:26.415: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 22 14:37:26.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7295" for this suite. 
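
The Kubectl patch test above uses kubectl's default strategic-merge patch to add an annotation to the RC-managed pod and then re-reads the pod to confirm it. By hand, with the pod name and payload from this run (the jsonpath read-back is one way to confirm, not the framework's own check):

kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nw6d6 \
  --namespace=kubectl-7295 -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation landed:
kubectl --kubeconfig=/root/.kube/config get pod redis-master-nw6d6 \
  --namespace=kubectl-7295 -o jsonpath='{.metadata.annotations.x}'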
Mar 22 14:37:48.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 14:37:48.503: INFO: namespace kubectl-7295 deletion completed in 22.085760808s
• [SLOW TEST:25.534 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 22 14:37:48.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Mar 22 14:37:52.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-207a71cb-c8eb-4913-b79c-f5d88cd9beeb -c busybox-main-container --namespace=emptydir-1284 -- cat /usr/share/volumeshare/shareddata.txt'
Mar 22 14:37:52.798: INFO: stderr: "I0322 14:37:52.727268 3795 log.go:172] (0xc000a48420) (0xc0005ea960) Create stream\nI0322 14:37:52.727353 3795 log.go:172] (0xc000a48420) (0xc0005ea960) Stream added, broadcasting: 1\nI0322 14:37:52.730640 3795 log.go:172] (0xc000a48420) Reply frame received for 1\nI0322 14:37:52.730674 3795 log.go:172] (0xc000a48420) (0xc0005ea000) Create stream\nI0322 14:37:52.730684 3795 log.go:172] (0xc000a48420) (0xc0005ea000) Stream added, broadcasting: 3\nI0322 14:37:52.731584 3795 log.go:172] (0xc000a48420) Reply frame received for 3\nI0322 14:37:52.731629 3795 log.go:172] (0xc000a48420) (0xc0005c41e0) Create stream\nI0322 14:37:52.731647 3795 log.go:172] (0xc000a48420) (0xc0005c41e0) Stream added, broadcasting: 5\nI0322 14:37:52.732693 3795 log.go:172] (0xc000a48420) Reply frame received for 5\nI0322 14:37:52.790540 3795 log.go:172] (0xc000a48420) Data frame received for 3\nI0322 14:37:52.790566 3795 log.go:172] (0xc0005ea000) (3) Data frame handling\nI0322 14:37:52.790576 3795 log.go:172] (0xc0005ea000) (3) Data frame sent\nI0322 14:37:52.790784 3795 log.go:172] (0xc000a48420) Data frame received for 5\nI0322 14:37:52.790815 3795 log.go:172] (0xc0005c41e0) (5) Data frame handling\nI0322 14:37:52.790831 3795 log.go:172] (0xc000a48420) Data frame received for 3\nI0322 14:37:52.790838 3795 log.go:172] (0xc0005ea000) (3) Data frame handling\nI0322 14:37:52.792901 3795 log.go:172] (0xc000a48420) Data frame received for 1\nI0322 14:37:52.792928 3795 log.go:172] (0xc0005ea960) (1) Data frame handling\nI0322 14:37:52.792947 3795 log.go:172] (0xc0005ea960) (1) Data frame sent\nI0322 14:37:52.792975 3795 log.go:172] (0xc000a48420) (0xc0005ea960) Stream removed, broadcasting: 1\nI0322 14:37:52.793015 3795 log.go:172] (0xc000a48420) Go away received\nI0322 14:37:52.793873 3795 log.go:172] (0xc000a48420) (0xc0005ea960) Stream removed, broadcasting: 1\nI0322 14:37:52.793917 3795 log.go:172] (0xc000a48420) (0xc0005ea000) Stream removed, broadcasting: 3\nI0322 14:37:52.793933 3795 log.go:172] (0xc000a48420) (0xc0005c41e0) Stream removed, broadcasting: 5\n"
Mar 22 14:37:52.798: INFO: stdout: "Hello from the busy-box sub-container\n"
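The exec above reads back a file that a sibling container wrote, which only works because both containers mount the same emptyDir volume. The sketch below reconstructs such a two-container pod from the names and paths visible in the log, assuming a recent client-go; the write-then-sleep commands are illustrative, and the spec the suite actually builds (see test/e2e/common/empty_dir.go) may differ.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
		Spec: corev1.PodSpec{
			// One emptyDir volume, mounted by both containers below.
			Volumes: []corev1.Volume{{
				Name: "shared-data",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{
				{
					// Writer: drops the file into the shared volume, then sleeps.
					Name:    "busybox-sub-container",
					Image:   "busybox",
					Command: []string{"/bin/sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "shared-data",
						MountPath: "/usr/share/volumeshare",
					}},
				},
				{
					// Reader: the spec above execs `cat` in this container.
					Name:    "busybox-main-container",
					Image:   "busybox",
					Command: []string{"/bin/sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "shared-data",
						MountPath: "/usr/share/volumeshare",
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

An emptyDir volume is backed by node-local storage and lives as long as the pod does, so a write from any container in the pod is immediately visible to the others.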
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 22 14:37:52.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1284" for this suite.
Mar 22 14:37:58.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 22 14:37:58.905: INFO: namespace emptydir-1284 deletion completed in 6.102263606s
• [SLOW TEST:10.401 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
Mar 22 14:37:58.905: INFO: Running AfterSuite actions on all nodes
Mar 22 14:37:58.905: INFO: Running AfterSuite actions on node 1
Mar 22 14:37:58.905: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6134.711 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS