I0107 12:56:24.608228 8 e2e.go:243] Starting e2e run "de7d5091-86d1-456b-9724-fdd4601f6236" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578401783 - Will randomize all specs
Will run 215 of 4412 specs

Jan 7 12:56:25.023: INFO: >>> kubeConfig: /root/.kube/config
Jan 7 12:56:25.028: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 7 12:56:25.059: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 7 12:56:25.093: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 7 12:56:25.093: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 7 12:56:25.093: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 7 12:56:25.104: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 7 12:56:25.104: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 7 12:56:25.104: INFO: e2e test version: v1.15.7
Jan 7 12:56:25.105: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:56:25.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Jan 7 12:56:25.840: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 7 12:56:25.861: INFO: Waiting up to 5m0s for pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944" in namespace "emptydir-2104" to be "success or failure"
Jan 7 12:56:25.873: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Pending", Reason="", readiness=false. Elapsed: 11.9645ms
Jan 7 12:56:27.883: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021610747s
Jan 7 12:56:29.901: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040456313s
Jan 7 12:56:31.910: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049565086s
Jan 7 12:56:33.927: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066372086s
Jan 7 12:56:35.938: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076977466s
STEP: Saw pod success
Jan 7 12:56:35.938: INFO: Pod "pod-ef8d312f-9acb-4875-b84f-77fcdd415944" satisfied condition "success or failure"
Jan 7 12:56:35.944: INFO: Trying to get logs from node iruya-node pod pod-ef8d312f-9acb-4875-b84f-77fcdd415944 container test-container:
STEP: delete the pod
Jan 7 12:56:36.061: INFO: Waiting for pod pod-ef8d312f-9acb-4875-b84f-77fcdd415944 to disappear
Jan 7 12:56:36.083: INFO: Pod pod-ef8d312f-9acb-4875-b84f-77fcdd415944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:56:36.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2104" for this suite.
Jan 7 12:56:42.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:56:42.242: INFO: namespace emptydir-2104 deletion completed in 6.151123073s
• [SLOW TEST:17.137 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
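For orientation, a minimal sketch of the kind of pod this EmptyDir test family creates (illustrative only, not the suite's actual source; image, command, and names are assumptions). The tmpfs variant sets Medium to "Memory"; the default-medium variants leave it unset, and the non-root variants add a RunAsUser in the security context:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod returns a pod that mounts a tmpfs-backed emptyDir, writes a
// file as root with mode 0644 (via the umask), and exits, so the pod can reach
// the "Succeeded" phase that the log above polls for.
func emptyDirTmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // assumption; the real test uses an e2e mounttest image
				Command:      []string{"sh", "-c", "umask 0133 && echo ok >/test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" requests tmpfs; leaving Medium unset uses the node's default medium.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

func main() { _ = emptyDirTmpfsPod() }
```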
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:56:42.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 7 12:56:42.532: INFO: Waiting up to 5m0s for pod "pod-016361f0-665c-472c-b5f2-390a46902a93" in namespace "emptydir-32" to be "success or failure"
Jan 7 12:56:42.548: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93": Phase="Pending", Reason="", readiness=false. Elapsed: 15.145919ms
Jan 7 12:56:44.583: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049416587s
Jan 7 12:56:46.606: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072782007s
Jan 7 12:56:48.623: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089829514s
Jan 7 12:56:50.638: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105259209s
STEP: Saw pod success
Jan 7 12:56:50.639: INFO: Pod "pod-016361f0-665c-472c-b5f2-390a46902a93" satisfied condition "success or failure"
Jan 7 12:56:50.644: INFO: Trying to get logs from node iruya-node pod pod-016361f0-665c-472c-b5f2-390a46902a93 container test-container:
STEP: delete the pod
Jan 7 12:56:50.736: INFO: Waiting for pod pod-016361f0-665c-472c-b5f2-390a46902a93 to disappear
Jan 7 12:56:50.748: INFO: Pod pod-016361f0-665c-472c-b5f2-390a46902a93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:56:50.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-32" for this suite.
Jan 7 12:56:57.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:56:57.240: INFO: namespace emptydir-32 deletion completed in 6.479705343s
• [SLOW TEST:14.997 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:56:57.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 7 12:56:57.341: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:57:17.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7910" for this suite.
Jan 7 12:57:39.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:57:39.598: INFO: namespace init-container-7910 deletion completed in 22.210875997s
• [SLOW TEST:42.358 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
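A minimal sketch of the pod shape this test submits (images and names are illustrative assumptions): init containers run to completion, one at a time and in order, before the app container starts, and with RestartPolicy Always the pod then stays Running:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod has two init containers that must each exit 0 before the
// long-running app container is started.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}

func main() { _ = initContainerPod() }
```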
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:57:39.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 7 12:57:50.365: INFO: Successfully updated pod "pod-update-68dcd267-952b-447e-8f86-a64338454adb"
STEP: verifying the updated pod is in kubernetes
Jan 7 12:57:50.455: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:57:50.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3385" for this suite.
Jan 7 12:58:12.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:58:12.639: INFO: namespace pods-3385 deletion completed in 22.156538343s
• [SLOW TEST:33.040 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
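The "updating the pod" step boils down to mutating mutable pod metadata in place. One way to reproduce it by hand is a strategic merge patch like the following sketch (the label key and value are illustrative assumptions; such a patch could be sent with kubectl patch or clientset.CoreV1().Pods(ns).Patch):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
)

// A strategic merge patch that adds/overwrites one pod label, the kind of
// in-place update this test verifies is reflected on read-back.
func main() {
	patch := []byte(`{"metadata":{"labels":{"time":"updated"}}}`)
	fmt.Println(types.StrategicMergePatchType, string(patch))
}
```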
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:58:12.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 7 12:58:12.716: INFO: Creating deployment "test-recreate-deployment"
Jan 7 12:58:12.776: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 7 12:58:12.815: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 7 12:58:14.834: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 7 12:58:14.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 12:58:16.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 12:58:18.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 12:58:20.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998692, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 12:58:22.850: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 7 12:58:22.868: INFO: Updating deployment test-recreate-deployment
Jan 7 12:58:22.868: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 7 12:58:23.194: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2815,SelfLink:/apis/apps/v1/namespaces/deployment-2815/deployments/test-recreate-deployment,UID:1ba8c8bb-f791-420b-9ebb-b474c41833f6,ResourceVersion:19645647,Generation:2,CreationTimestamp:2020-01-07 12:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-07 12:58:23 +0000 UTC 2020-01-07 12:58:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-07 12:58:23 +0000 UTC 2020-01-07 12:58:12 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Jan 7 12:58:23.223: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2815,SelfLink:/apis/apps/v1/namespaces/deployment-2815/replicasets/test-recreate-deployment-5c8c9cc69d,UID:225054ee-2eb1-4810-923b-ff76e79b3020,ResourceVersion:19645646,Generation:1,CreationTimestamp:2020-01-07 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ba8c8bb-f791-420b-9ebb-b474c41833f6 0xc002db5777 0xc002db5778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 7 12:58:23.223: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 7 12:58:23.223: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2815,SelfLink:/apis/apps/v1/namespaces/deployment-2815/replicasets/test-recreate-deployment-6df85df6b9,UID:d58780f8-ad2d-4ad7-a534-810a18170825,ResourceVersion:19645636,Generation:2,CreationTimestamp:2020-01-07 12:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ba8c8bb-f791-420b-9ebb-b474c41833f6 0xc002db5847 0xc002db5848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 7 12:58:23.227: INFO: Pod "test-recreate-deployment-5c8c9cc69d-65vsz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-65vsz,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2815,SelfLink:/api/v1/namespaces/deployment-2815/pods/test-recreate-deployment-5c8c9cc69d-65vsz,UID:16a6fe9b-6437-4722-9044-9592fcf96111,ResourceVersion:19645648,Generation:0,CreationTimestamp:2020-01-07 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 225054ee-2eb1-4810-923b-ff76e79b3020 0xc002dea127 0xc002dea128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n5tjm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n5tjm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n5tjm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dea1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dea1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:58:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:58:23.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2815" for this suite.
Jan 7 12:58:29.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:58:29.385: INFO: namespace deployment-2815 deletion completed in 6.151343382s
• [SLOW TEST:16.745 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
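A minimal sketch of the deployment under test (illustrative, not the suite's source). The key point is strategy Recreate: the old ReplicaSet is scaled to zero before the new one is created, so old and new pods never run together; the dumps above show the old redis ReplicaSet at Replicas:*0 while the new nginx ReplicaSet comes up:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment mirrors the labels and revision-1 image seen in the dumps
// above; the rollout at 12:58:22 then swaps the template to nginx.
func recreateDeployment(replicas int32) *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate (instead of the default RollingUpdate) kills old pods first.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"}},
				},
			},
		},
	}
}

func main() { _ = recreateDeployment(1) }
```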
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:58:29.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 7 12:58:29.496: INFO: Waiting up to 5m0s for pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548" in namespace "downward-api-766" to be "success or failure"
Jan 7 12:58:29.513: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 16.907294ms
Jan 7 12:58:31.523: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026786453s
Jan 7 12:58:33.541: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044078739s
Jan 7 12:58:35.615: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118531141s
Jan 7 12:58:37.647: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150770837s
Jan 7 12:58:39.655: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157942792s
Jan 7 12:58:41.664: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.166948458s
STEP: Saw pod success
Jan 7 12:58:41.664: INFO: Pod "downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548" satisfied condition "success or failure"
Jan 7 12:58:41.668: INFO: Trying to get logs from node iruya-node pod downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548 container dapi-container:
STEP: delete the pod
Jan 7 12:58:41.814: INFO: Waiting for pod downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548 to disappear
Jan 7 12:58:41.828: INFO: Pod downward-api-e2152fde-8dd6-4346-9a72-b0fc45fae548 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:58:41.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-766" for this suite.
Jan 7 12:58:47.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:58:48.036: INFO: namespace downward-api-766 deletion completed in 6.201050285s
• [SLOW TEST:18.651 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
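A minimal sketch of the downward-API wiring this test exercises (image and command are illustrative assumptions): the kubelet resolves the fieldRef status.hostIP when the container starts, and the test only has to read the container's output:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod exposes the node's IP to the container as the HOST_IP env var.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downward-api-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }
```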
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:58:48.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-ae707e1c-0d62-49cc-bc5d-4a6afb1e02b6
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:58:48.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8291" for this suite.
Jan 7 12:58:54.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:58:54.376: INFO: namespace configmap-8291 deletion completed in 6.25415969s
• [SLOW TEST:6.339 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
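A sketch of the object this negative test submits (names are illustrative): ConfigMap data keys must be non-empty and consist of alphanumerics, '-', '_' or '.', so the apiserver rejects the create with an Invalid error, which is what the test asserts:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// badConfigMap has an empty data key; creating it is expected to fail validation.
func badConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "configmap-test-emptykey-"},
		Data:       map[string]string{"": "value-1"}, // empty key: rejected by the apiserver
	}
}

func main() { _ = badConfigMap() }
```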
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:58:54.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 7 12:58:54.524: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4633" to be "success or failure"
Jan 7 12:58:54.533: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.103029ms
Jan 7 12:58:56.548: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023938308s
Jan 7 12:58:58.562: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037913096s
Jan 7 12:59:00.576: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051447174s
Jan 7 12:59:02.599: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074572242s
Jan 7 12:59:04.630: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105954764s
STEP: Saw pod success
Jan 7 12:59:04.631: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 7 12:59:04.650: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jan 7 12:59:04.800: INFO: Waiting for pod pod-host-path-test to disappear
Jan 7 12:59:04.896: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 12:59:04.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4633" for this suite.
Jan 7 12:59:10.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 12:59:11.113: INFO: namespace hostpath-4633 deletion completed in 6.209629109s
• [SLOW TEST:16.736 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
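A sketch of pod-host-path-test's shape (the path, image, and command are illustrative assumptions): a hostPath volume is mounted into a container that inspects the mount's mode; DirectoryOrCreate keeps the pod schedulable on a node where the path does not yet exist:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathPod mounts a directory from the node and prints its mode.
func hostPathPod() *corev1.Pod {
	hpType := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container-1",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-e2e", Type: &hpType},
				},
			}},
		},
	}
}

func main() { _ = hostPathPod() }
```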
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 12:59:11.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:00:13.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2198" for this suite.
Jan 7 13:00:19.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:00:19.195: INFO: namespace container-runtime-2198 deletion completed in 6.157808801s
• [SLOW TEST:68.079 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
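The three container names encode the restart policy under test (rpa = Always, rpof = OnFailure, rpn = Never). A hedged sketch of the per-case pod (image, command, and the env-var plumbing are illustrative assumptions): the container exits with a fixed code, and the suite compares the resulting RestartCount, Phase, Ready condition, and State against expectations:

```go
package main

import (
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatePod runs one container that exits with the given code under the
// given restart policy; the kubelet's reaction (restart or not) drives the
// observed status fields.
func terminatePod(policy corev1.RestartPolicy, exitCode int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "terminate-cmd-"},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit " + strconv.Itoa(exitCode)},
			}},
		},
	}
}

func main() { _ = terminatePod(corev1.RestartPolicyOnFailure, 1) }
```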
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:00:19.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 7 13:00:19.318: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 7 13:00:19.962: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 7 13:00:22.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998820, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 13:00:24.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998820, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 13:00:26.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998820, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 13:00:28.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998820, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 13:00:30.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998820, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713998819, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 7 13:00:36.894: INFO: Waited 4.482953686s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:00:37.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3622" for this suite.
Jan 7 13:00:43.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:00:43.850: INFO: namespace aggregator-3622 deletion completed in 6.249750942s
• [SLOW TEST:24.653 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
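The "Registering the sample API server" step centers on an APIService object that tells the aggregation layer to proxy an API group to an in-cluster Service fronting the sample-apiserver deployment above. A hedged sketch follows; the group name, service name, and CA handling are illustrative assumptions, not the suite's actual values:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// sampleAPIService routes /apis/wardle.k8s.io/v1alpha1 through the aggregator
// to a Service in the test namespace; the CA bundle lets the kube-apiserver
// verify the sample server's serving certificate.
func sampleAPIService(caBundle []byte) *apiregistrationv1.APIService {
	return &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.k8s.io",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-3622",
				Name:      "sample-api",
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}

func main() { _ = sampleAPIService(nil) }
```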
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:00:43.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 7 13:00:43.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6" in namespace "downward-api-739" to be "success or failure"
Jan 7 13:00:44.007: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.302588ms
Jan 7 13:00:46.020: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028430715s
Jan 7 13:00:48.032: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040609532s
Jan 7 13:00:50.041: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048918332s
Jan 7 13:00:52.053: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061380095s
Jan 7 13:00:54.060: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.068150269s
Jan 7 13:00:56.067: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.075792129s
STEP: Saw pod success
Jan 7 13:00:56.068: INFO: Pod "downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6" satisfied condition "success or failure"
Jan 7 13:00:56.073: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6 container client-container:
STEP: delete the pod
Jan 7 13:00:56.128: INFO: Waiting for pod downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6 to disappear
Jan 7 13:00:56.181: INFO: Pod downwardapi-volume-6c49cf39-d492-4a13-a180-5bfac629a2f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:00:56.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-739" for this suite.
Jan 7 13:01:02.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:01:02.318: INFO: namespace downward-api-739 deletion completed in 6.131364623s
• [SLOW TEST:18.467 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
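A sketch of the plumbing this test exercises (paths, image, and the request value are illustrative assumptions): a downwardAPI volume file whose content is the container's own CPU request, which the container prints and the test reads back from its logs:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuRequestPod mounts a downwardAPI volume exposing requests.cpu as a file.
func cpuRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = cpuRequestPod() }
```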
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:01:02.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8c5dc924-cf3d-4738-9642-347c62eb3d8d
STEP: Creating configMap with name cm-test-opt-upd-4999d11e-7a6b-4d20-a657-f7a4e3441a0a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8c5dc924-cf3d-4738-9642-347c62eb3d8d
STEP: Updating configmap cm-test-opt-upd-4999d11e-7a6b-4d20-a657-f7a4e3441a0a
STEP: Creating configMap with name cm-test-opt-create-e6e58924-4692-4391-beb3-c87601e8b9bb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:01:20.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2645" for this suite.
Jan 7 13:01:42.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:01:43.086: INFO: namespace projected-2645 deletion completed in 22.116617455s
• [SLOW TEST:40.768 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
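A sketch of the volume shape this test watches (names are illustrative): a projected volume referencing a ConfigMap with Optional set, so the pod still starts if the map is absent, and the kubelet refreshes the mounted files as the maps above are deleted, updated, and created:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedOptionalVolume projects an optional ConfigMap into a volume; the
// test mounts several of these and watches the files change.
func projectedOptionalVolume(cmName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Optional:             &optional, // pod starts even if cmName does not exist
					},
				}},
			},
		},
	}
}

func main() { _ = projectedOptionalVolume("cm-test-opt-del") }
```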
Jan 7 13:01:59.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:01:59.564: INFO: namespace emptydir-2197 deletion completed in 6.211382348s • [SLOW TEST:16.477 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:01:59.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9164 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 7 13:01:59.642: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 7 13:02:33.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9164 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 13:02:33.827: INFO: >>> kubeConfig: /root/.kube/config I0107 13:02:33.960020 8 log.go:172] (0xc00023c9a0) (0xc0011a06e0) Create stream I0107 13:02:33.960282 8 log.go:172] (0xc00023c9a0) (0xc0011a06e0) Stream added, broadcasting: 1 I0107 13:02:33.972341 8 log.go:172] (0xc00023c9a0) Reply frame received for 1 I0107 13:02:33.972488 8 log.go:172] (0xc00023c9a0) (0xc0014d81e0) Create stream I0107 13:02:33.972515 8 log.go:172] (0xc00023c9a0) (0xc0014d81e0) Stream added, broadcasting: 3 I0107 13:02:33.974384 8 log.go:172] (0xc00023c9a0) Reply frame received for 3 I0107 13:02:33.974418 8 log.go:172] (0xc00023c9a0) (0xc001f9e000) Create stream I0107 13:02:33.974445 8 log.go:172] (0xc00023c9a0) (0xc001f9e000) Stream added, broadcasting: 5 I0107 13:02:33.976145 8 log.go:172] (0xc00023c9a0) Reply frame received for 5 I0107 13:02:34.467653 8 log.go:172] (0xc00023c9a0) Data frame received for 3 I0107 13:02:34.467949 8 log.go:172] (0xc0014d81e0) (3) Data frame handling I0107 13:02:34.468016 8 log.go:172] (0xc0014d81e0) (3) Data frame sent I0107 13:02:34.688516 8 log.go:172] (0xc00023c9a0) (0xc0014d81e0) Stream removed, broadcasting: 3 I0107 13:02:34.688974 8 log.go:172] (0xc00023c9a0) Data frame received for 1 I0107 13:02:34.689007 8 log.go:172] (0xc0011a06e0) (1) Data frame handling I0107 13:02:34.689044 8 log.go:172] (0xc0011a06e0) (1) Data frame sent I0107 13:02:34.689067 8 log.go:172] (0xc00023c9a0) (0xc0011a06e0) Stream removed, broadcasting: 1 I0107 13:02:34.689395 8 log.go:172] 
(0xc00023c9a0) (0xc001f9e000) Stream removed, broadcasting: 5 I0107 13:02:34.689461 8 log.go:172] (0xc00023c9a0) Go away received I0107 13:02:34.689992 8 log.go:172] (0xc00023c9a0) (0xc0011a06e0) Stream removed, broadcasting: 1 I0107 13:02:34.690060 8 log.go:172] (0xc00023c9a0) (0xc0014d81e0) Stream removed, broadcasting: 3 I0107 13:02:34.690100 8 log.go:172] (0xc00023c9a0) (0xc001f9e000) Stream removed, broadcasting: 5 Jan 7 13:02:34.690: INFO: Waiting for endpoints: map[] Jan 7 13:02:34.708: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9164 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 13:02:34.708: INFO: >>> kubeConfig: /root/.kube/config I0107 13:02:34.861559 8 log.go:172] (0xc0009ddd90) (0xc0014d86e0) Create stream I0107 13:02:34.862364 8 log.go:172] (0xc0009ddd90) (0xc0014d86e0) Stream added, broadcasting: 1 I0107 13:02:34.884423 8 log.go:172] (0xc0009ddd90) Reply frame received for 1 I0107 13:02:34.884722 8 log.go:172] (0xc0009ddd90) (0xc0011a0820) Create stream I0107 13:02:34.884755 8 log.go:172] (0xc0009ddd90) (0xc0011a0820) Stream added, broadcasting: 3 I0107 13:02:34.889303 8 log.go:172] (0xc0009ddd90) Reply frame received for 3 I0107 13:02:34.889356 8 log.go:172] (0xc0009ddd90) (0xc0014d8780) Create stream I0107 13:02:34.889367 8 log.go:172] (0xc0009ddd90) (0xc0014d8780) Stream added, broadcasting: 5 I0107 13:02:34.892471 8 log.go:172] (0xc0009ddd90) Reply frame received for 5 I0107 13:02:35.052179 8 log.go:172] (0xc0009ddd90) Data frame received for 3 I0107 13:02:35.052339 8 log.go:172] (0xc0011a0820) (3) Data frame handling I0107 13:02:35.052375 8 log.go:172] (0xc0011a0820) (3) Data frame sent I0107 13:02:35.155815 8 log.go:172] (0xc0009ddd90) Data frame received for 1 I0107 13:02:35.155929 8 log.go:172] (0xc0009ddd90) (0xc0014d8780) Stream removed, broadcasting: 5 I0107 13:02:35.155999 8 log.go:172] (0xc0014d86e0) (1) Data frame handling I0107 13:02:35.156028 8 log.go:172] (0xc0014d86e0) (1) Data frame sent I0107 13:02:35.156064 8 log.go:172] (0xc0009ddd90) (0xc0011a0820) Stream removed, broadcasting: 3 I0107 13:02:35.156145 8 log.go:172] (0xc0009ddd90) (0xc0014d86e0) Stream removed, broadcasting: 1 I0107 13:02:35.156175 8 log.go:172] (0xc0009ddd90) Go away received I0107 13:02:35.156380 8 log.go:172] (0xc0009ddd90) (0xc0014d86e0) Stream removed, broadcasting: 1 I0107 13:02:35.156390 8 log.go:172] (0xc0009ddd90) (0xc0011a0820) Stream removed, broadcasting: 3 I0107 13:02:35.156399 8 log.go:172] (0xc0009ddd90) (0xc0014d8780) Stream removed, broadcasting: 5 Jan 7 13:02:35.156: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:02:35.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9164" for this suite. 
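The ExecWithOptions traces above boil down to a plain kubectl exec: the test asks the netexec container inside host-test-container-pod to dial each target pod over HTTP and report the hostname it reached. A by-hand sketch using the pod name, namespace, and IPs taken from this run (the response is JSON from the netexec test image, roughly {"responses":["<target hostname>"]}):

kubectl -n pod-network-test-9164 exec host-test-container-pod -- \
  curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'
# the log's "Waiting for endpoints: map[]" lines mean the remaining-endpoint set
# is empty, i.e. every expected pod has already answered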
Jan 7 13:03:03.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:03:03.296: INFO: namespace pod-network-test-9164 deletion completed in 28.132032787s • [SLOW TEST:63.730 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:03:03.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:03:03.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 7 13:03:03.534: INFO: stderr: "" Jan 7 13:03:03.534: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:03:03.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1186" for this suite. 
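The kubectl version spec only asserts that both the client and server stanzas are present in the output. Reproducing it against the same kubeconfig:

kubectl --kubeconfig=/root/.kube/config version
# expect both stanzas, e.g.
#   Client Version: version.Info{... GitVersion:"v1.15.7" ...}
#   Server Version: version.Info{... GitVersion:"v1.15.1" ...}
# the one-patch-release skew seen in this run is well within the supported range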
Jan 7 13:03:09.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:03:09.729: INFO: namespace kubectl-1186 deletion completed in 6.18878688s • [SLOW TEST:6.433 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:03:09.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0107 13:03:39.983369 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 7 13:03:39.983: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:03:39.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4375" for this suite. 
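PropagationPolicy=Orphan tells the garbage collector to strip the owner reference from dependents instead of cascading the delete, which is why the ReplicaSet must survive its Deployment here. A minimal sketch with a hypothetical deployment name (on a v1.15-era kubectl the flag spelling is --cascade=false; newer releases spell the same policy --cascade=orphan):

kubectl create deployment demo --image=nginx
kubectl delete deployment demo --cascade=false       # orphan the dependents
kubectl get rs -l app=demo \
  -o jsonpath='{.items[*].metadata.ownerReferences}'  # empty: the RS has no owner left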
Jan 7 13:03:50.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:03:51.227: INFO: namespace gc-4375 deletion completed in 11.230337855s • [SLOW TEST:41.498 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:03:51.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:03:56.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5208" for this suite. 
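Each watcher in this spec is started from a recorded resourceVersion and must observe the later events in one canonical order. The same mechanism can be poked at through the raw watch API; the namespace and object name below are illustrative, and this assumes kubectl get --raw streams the chunked watch response:

RV=$(kubectl get configmap demo-config -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
# emits ADDED/MODIFIED/DELETED JSON events that occurred after ${RV}; two watches
# started from the same resourceVersion replay the same event sequence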
Jan 7 13:04:03.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:04:03.255: INFO: namespace watch-5208 deletion completed in 6.275052707s • [SLOW TEST:12.027 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:04:03.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 7 13:04:03.346: INFO: Waiting up to 5m0s for pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5" in namespace "emptydir-7161" to be "success or failure" Jan 7 13:04:03.362: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.541914ms Jan 7 13:04:05.394: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046887993s Jan 7 13:04:07.405: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058506222s Jan 7 13:04:09.413: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066349185s Jan 7 13:04:11.430: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083295066s Jan 7 13:04:13.441: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094423518s STEP: Saw pod success Jan 7 13:04:13.441: INFO: Pod "pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5" satisfied condition "success or failure" Jan 7 13:04:13.447: INFO: Trying to get logs from node iruya-node pod pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5 container test-container: STEP: delete the pod Jan 7 13:04:13.507: INFO: Waiting for pod pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5 to disappear Jan 7 13:04:13.554: INFO: Pod pod-924d91df-c3fc-49c5-9d38-ec1a8d2519f5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:04:13.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7161" for this suite. 
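The emptyDir matrix in this suite varies only the requesting UID, the file mode under test, and the volume medium. A minimal pod sketch for a non-root, default-medium case; all names and the image are illustrative, and the real test uses a mount-test image that creates and stats a file rather than just listing the mount point:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # the "non-root" variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ldn /test-volume"]   # report mode and numeric owner
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium = node storage; medium: Memory gives tmpfs
EOF
kubectl logs emptydir-demo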
Jan 7 13:04:19.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:04:19.799: INFO: namespace emptydir-7161 deletion completed in 6.235520022s • [SLOW TEST:16.544 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:04:19.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ded212f3-a91f-4867-aa26-2a0909826667 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ded212f3-a91f-4867-aa26-2a0909826667 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:05:41.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6496" for this suite. 
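Updates to a ConfigMap behind a projected volume are pushed into running pods by the kubelet's sync loop, which is what the long "waiting to observe update in volume" step polls for. A sketch with illustrative names:

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
# after the next kubelet sync (typically well under a minute) the file changes:
kubectl exec projected-cm-demo -- cat /etc/cfg/data-1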
Jan 7 13:06:03.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:06:03.794: INFO: namespace projected-6496 deletion completed in 22.22279766s • [SLOW TEST:103.994 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:06:03.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-982/configmap-test-43cfeeb9-e161-45d9-b3eb-488a36a7b5ab STEP: Creating a pod to test consume configMaps Jan 7 13:06:03.957: INFO: Waiting up to 5m0s for pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724" in namespace "configmap-982" to be "success or failure" Jan 7 13:06:03.966: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221383ms Jan 7 13:06:05.976: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018994181s Jan 7 13:06:07.984: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026599026s Jan 7 13:06:10.000: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04239243s Jan 7 13:06:12.015: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057624309s Jan 7 13:06:14.030: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072275582s STEP: Saw pod success Jan 7 13:06:14.030: INFO: Pod "pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724" satisfied condition "success or failure" Jan 7 13:06:14.039: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724 container env-test: STEP: delete the pod Jan 7 13:06:14.428: INFO: Waiting for pod pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724 to disappear Jan 7 13:06:14.435: INFO: Pod pod-configmaps-f27e0ada-284b-4406-be12-81d48d656724 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:06:14.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-982" for this suite. 
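Consuming a ConfigMap "via the environment" means wiring a key into an env var with configMapKeyRef, then reading it back from the container. A sketch with illustrative names:

kubectl create configmap configmap-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
EOF
kubectl logs env-test    # expect: DATA_1=value-1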
Jan 7 13:06:20.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:06:20.632: INFO: namespace configmap-982 deletion completed in 6.191202042s • [SLOW TEST:16.835 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:06:20.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7966.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7966.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7966.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7966.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7966.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7966.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 7 13:06:36.824: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.833: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.840: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7966.svc.cluster.local from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.849: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.856: INFO: Unable to read jessie_udp@PodARecord from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.864: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5: the server could not find the requested resource (get pods dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5) Jan 7 13:06:36.864: INFO: Lookups using dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7966.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 7 13:06:41.956: INFO: DNS probes using dns-7966/dns-test-f579e57c-34cc-4f30-85a6-0b6bc5e5c1f5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:06:42.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7966" for this suite. 
Jan 7 13:06:50.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:06:50.343: INFO: namespace dns-7966 deletion completed in 8.287538338s • [SLOW TEST:29.711 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:06:50.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0107 13:07:04.005640 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 7 13:07:04.005: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:07:04.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7222" for this suite. 
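Half of the pods in this spec carry two owner references, so deleting simpletest-rc-to-be-deleted must not remove them while simpletest-rc-to-stay still owns them: the garbage collector deletes a dependent only once every owner is gone. The ownership is visible on the pods themselves:

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# pods listing both RCs survive the delete; pods owned only by the deleted RC are collected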
Jan 7 13:07:27.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:07:27.764: INFO: namespace gc-7222 deletion completed in 23.751922738s • [SLOW TEST:37.420 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:07:27.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 7 13:07:38.163: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:07:38.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9792" for this suite. 
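A terminated container's status carries whatever the container wrote to terminationMessagePath, and as this case exercises, the path need not be the default /dev/termination-log nor written by root. A sketch mirroring the test; names are illustrative, and it assumes (as the conformance case does) that the kubelet-managed file is writable by the non-root UID:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # DONE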
Jan 7 13:07:44.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:07:44.427: INFO: namespace container-runtime-9792 deletion completed in 6.198266306s • [SLOW TEST:16.663 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:07:44.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7a8d46f9-c4b2-43ca-be48-f7f60d9fcdfb STEP: Creating a pod to test consume secrets Jan 7 13:07:44.728: INFO: Waiting up to 5m0s for pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8" in namespace "secrets-7982" to be "success or failure" Jan 7 13:07:44.747: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.167394ms Jan 7 13:07:46.757: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028358223s Jan 7 13:07:48.763: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034857645s Jan 7 13:07:50.804: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075515769s Jan 7 13:07:52.816: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087628434s STEP: Saw pod success Jan 7 13:07:52.817: INFO: Pod "pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8" satisfied condition "success or failure" Jan 7 13:07:52.822: INFO: Trying to get logs from node iruya-node pod pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8 container secret-volume-test: STEP: delete the pod Jan 7 13:07:52.985: INFO: Waiting for pod pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8 to disappear Jan 7 13:07:53.016: INFO: Pod pod-secrets-87f20962-7d84-497e-854c-c39c8bb2c9a8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:07:53.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7982" for this suite. 
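One Secret can back any number of volumes in a single pod; this spec mounts the same Secret twice and reads both copies. A sketch with illustrative names:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-demo
  - name: secret-volume-2
    secret:
      secretName: secret-demo
EOF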
Jan 7 13:07:59.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:07:59.154: INFO: namespace secrets-7982 deletion completed in 6.132294117s • [SLOW TEST:14.726 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:07:59.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-726c6066-59a1-4d46-90a7-ea181ee6639d STEP: Creating a pod to test consume secrets Jan 7 13:07:59.386: INFO: Waiting up to 5m0s for pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f" in namespace "secrets-1077" to be "success or failure" Jan 7 13:07:59.397: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.314689ms Jan 7 13:08:01.411: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024748712s Jan 7 13:08:03.449: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062625789s Jan 7 13:08:05.456: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069773221s Jan 7 13:08:07.466: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079285398s Jan 7 13:08:09.474: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087977629s STEP: Saw pod success Jan 7 13:08:09.475: INFO: Pod "pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f" satisfied condition "success or failure" Jan 7 13:08:09.479: INFO: Trying to get logs from node iruya-node pod pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f container secret-volume-test: STEP: delete the pod Jan 7 13:08:09.641: INFO: Waiting for pod pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f to disappear Jan 7 13:08:09.661: INFO: Pod pod-secrets-e831440c-6ef2-443b-a4bc-19b93f16258f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:08:09.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1077" for this suite. 
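Here defaultMode fixes the mode of the projected secret files, and the pod-level fsGroup determines the group ownership the kubelet applies, which is what lets a non-root UID read them. A sketch with illustrative values, reusing the secret from the previous sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001              # secret files become group-owned by gid 1001
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      defaultMode: 0440        # read-only for owner and group
EOF
kubectl logs secret-mode-demo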
Jan 7 13:08:15.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:08:15.866: INFO: namespace secrets-1077 deletion completed in 6.178337664s • [SLOW TEST:16.711 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:08:15.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-fbcf3f2e-d2a3-406b-b0c7-3389b35e6e72 STEP: Creating a pod to test consume secrets Jan 7 13:08:15.975: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62" in namespace "projected-620" to be "success or failure" Jan 7 13:08:15.988: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.792182ms Jan 7 13:08:17.996: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020379345s Jan 7 13:08:20.002: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026435599s Jan 7 13:08:22.016: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041094177s Jan 7 13:08:24.031: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Running", Reason="", readiness=true. Elapsed: 8.055705549s Jan 7 13:08:26.040: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06507982s STEP: Saw pod success Jan 7 13:08:26.041: INFO: Pod "pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62" satisfied condition "success or failure" Jan 7 13:08:26.044: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62 container projected-secret-volume-test: STEP: delete the pod Jan 7 13:08:26.181: INFO: Waiting for pod pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62 to disappear Jan 7 13:08:26.225: INFO: Pod pod-projected-secrets-56a07e2e-c7ad-4d6f-9465-728deb7e8e62 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:08:26.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-620" for this suite. 
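A projected volume differs from a plain secret volume in that one mount point can merge several sources. A sketch reusing the illustrative objects created above:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls /etc/all-in-one"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:                 # secret and configmap keys appear side by side
      - secret:
          name: secret-demo
      - configMap:
          name: demo-config
EOF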
Jan 7 13:08:32.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:08:32.368: INFO: namespace projected-620 deletion completed in 6.138903728s • [SLOW TEST:16.502 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:08:32.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:08:32.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90" in namespace "projected-8673" to be "success or failure" Jan 7 13:08:32.545: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90": Phase="Pending", Reason="", readiness=false. Elapsed: 12.162086ms Jan 7 13:08:34.562: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028861501s Jan 7 13:08:36.577: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044161179s Jan 7 13:08:38.599: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065737591s Jan 7 13:08:40.617: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083704239s STEP: Saw pod success Jan 7 13:08:40.617: INFO: Pod "downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90" satisfied condition "success or failure" Jan 7 13:08:40.626: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90 container client-container: STEP: delete the pod Jan 7 13:08:40.732: INFO: Waiting for pod downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90 to disappear Jan 7 13:08:40.738: INFO: Pod downwardapi-volume-6399977c-b5c8-4cf2-91d7-fe1124eafe90 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:08:40.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8673" for this suite. 
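"Podname only" means the projected downwardAPI source exposes exactly one item, the pod's own name, as a file. A sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-volume-demo   # prints: downwardapi-volume-demo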
Jan 7 13:08:46.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:08:46.886: INFO: namespace projected-8673 deletion completed in 6.143057401s • [SLOW TEST:14.518 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:08:46.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 7 13:08:47.099: INFO: Waiting up to 5m0s for pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2" in namespace "downward-api-1454" to be "success or failure" Jan 7 13:08:47.114: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.974519ms Jan 7 13:08:49.136: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0366904s Jan 7 13:08:51.199: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09988764s Jan 7 13:08:53.209: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109934065s Jan 7 13:08:55.231: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131136527s Jan 7 13:08:57.240: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140437514s STEP: Saw pod success Jan 7 13:08:57.240: INFO: Pod "downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2" satisfied condition "success or failure" Jan 7 13:08:57.244: INFO: Trying to get logs from node iruya-node pod downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2 container dapi-container: STEP: delete the pod Jan 7 13:08:57.311: INFO: Waiting for pod downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2 to disappear Jan 7 13:08:57.356: INFO: Pod downward-api-d4621e3b-b1c1-4724-80f3-e936b2a85bd2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:08:57.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1454" for this suite. 
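The env-var flavor of the downward API injects the same pod metadata through fieldRef instead of a volume. A sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo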
Jan 7 13:09:03.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:09:03.651: INFO: namespace downward-api-1454 deletion completed in 6.285339917s • [SLOW TEST:16.764 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:09:03.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:09:03.712: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 7 13:09:06.908: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:09:07.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9153" for this suite. 
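The failure condition being checked is the ReplicaFailure condition the controller sets on the RC status when pod creation is rejected by quota, and clearing it is just a matter of scaling within the quota. A sketch; the namespace is illustrative, the quota/RC name matches the test:

kubectl create namespace quota-demo
kubectl -n quota-demo create quota condition-test --hard=pods=2
cat <<'EOF' | kubectl -n quota-demo apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                  # asks for more pods than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: c
        image: nginx
EOF
kubectl -n quota-demo get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'   # e.g. FailedCreate
kubectl -n quota-demo scale rc condition-test --replicas=2                 # condition clears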
Jan 7 13:09:19.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:09:19.515: INFO: namespace replication-controller-9153 deletion completed in 12.210083247s • [SLOW TEST:15.864 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:09:19.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-eb417cf1-c41b-417c-b263-1002ed26c450 in namespace container-probe-8118 Jan 7 13:09:27.662: INFO: Started pod liveness-eb417cf1-c41b-417c-b263-1002ed26c450 in namespace container-probe-8118 STEP: checking the pod's current state and verifying that restartCount is present Jan 7 13:09:27.669: INFO: Initial restart count of pod liveness-eb417cf1-c41b-417c-b263-1002ed26c450 is 0 Jan 7 13:09:47.852: INFO: Restart count of pod container-probe-8118/liveness-eb417cf1-c41b-417c-b263-1002ed26c450 is now 1 (20.182956041s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:09:48.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8118" for this suite. 
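The restart observed at the 20-second mark is the kubelet reacting to failed httpGet probes; the test image's /healthz answers 200 for its first ten seconds and then returns 500. A sketch along the lines of the standard liveness example (the era-appropriate image registry is an assumption):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz: 200 for ~10s, then 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # climbs past 0 once probes fail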
Jan 7 13:09:54.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:09:54.831: INFO: namespace container-probe-8118 deletion completed in 6.204251492s • [SLOW TEST:35.314 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:09:54.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 7 13:09:54.940: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647609,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 7 13:09:54.941: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647609,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 7 13:10:04.954: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647623,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 7 13:10:04.954: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647623,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 7 13:10:14.981: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647636,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 7 13:10:14.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647636,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 7 13:10:25.009: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647650,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 7 13:10:25.010: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-a,UID:7e27c054-09b1-4186-a308-6dbf63753402,ResourceVersion:19647650,Generation:0,CreationTimestamp:2020-01-07 13:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 7 13:10:35.030: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-b,UID:77b5c5b9-c499-4549-b2d2-e6a2f7e64758,ResourceVersion:19647664,Generation:0,CreationTimestamp:2020-01-07 13:10:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 7 13:10:35.030: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-b,UID:77b5c5b9-c499-4549-b2d2-e6a2f7e64758,ResourceVersion:19647664,Generation:0,CreationTimestamp:2020-01-07 13:10:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 7 13:10:45.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-b,UID:77b5c5b9-c499-4549-b2d2-e6a2f7e64758,ResourceVersion:19647680,Generation:0,CreationTimestamp:2020-01-07 13:10:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 7 13:10:45.048: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-712,SelfLink:/api/v1/namespaces/watch-712/configmaps/e2e-watch-test-configmap-b,UID:77b5c5b9-c499-4549-b2d2-e6a2f7e64758,ResourceVersion:19647680,Generation:0,CreationTimestamp:2020-01-07 13:10:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:10:55.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-712" for this suite. Jan 7 13:11:01.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:11:01.233: INFO: namespace watch-712 deletion completed in 6.167059743s • [SLOW TEST:66.402 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:11:01.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6240 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6240 STEP: Creating statefulset with conflicting port in namespace statefulset-6240 STEP: Waiting until pod test-pod starts running in namespace statefulset-6240 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6240 Jan 7 13:11:13.487: INFO: Observed stateful pod in namespace: statefulset-6240, name: ss-0, uid: b239d08e-6606-4a9e-94b3-57c0fb4c2071, status phase: Pending. Waiting for statefulset controller to delete. Jan 7 13:11:16.497: INFO: Observed stateful pod in namespace: statefulset-6240, name: ss-0, uid: b239d08e-6606-4a9e-94b3-57c0fb4c2071, status phase: Failed. Waiting for statefulset controller to delete. Jan 7 13:11:16.514: INFO: Observed stateful pod in namespace: statefulset-6240, name: ss-0, uid: b239d08e-6606-4a9e-94b3-57c0fb4c2071, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 7 13:11:16.598: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6240 STEP: Removing pod with conflicting port in namespace statefulset-6240 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6240 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 7 13:11:29.104: INFO: Deleting all statefulset in ns statefulset-6240 Jan 7 13:11:29.109: INFO: Scaling statefulset ss to 0 Jan 7 13:11:39.165: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:11:39.169: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:11:39.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6240" for this suite. Jan 7 13:11:47.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:11:47.384: INFO: namespace statefulset-6240 deletion completed in 8.162110974s • [SLOW TEST:46.151 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:11:47.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:11:47.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44" in namespace "downward-api-6054" to be "success or failure" Jan 7 13:11:47.787: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Pending", Reason="", readiness=false. Elapsed: 140.595252ms Jan 7 13:11:49.804: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156983181s Jan 7 13:11:51.816: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16944509s Jan 7 13:11:53.828: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.181448783s Jan 7 13:11:55.837: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190288901s Jan 7 13:11:57.844: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197499555s STEP: Saw pod success Jan 7 13:11:57.844: INFO: Pod "downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44" satisfied condition "success or failure" Jan 7 13:11:57.848: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44 container client-container: STEP: delete the pod Jan 7 13:11:58.052: INFO: Waiting for pod downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44 to disappear Jan 7 13:11:58.076: INFO: Pod downwardapi-volume-66c08f35-3cc1-41a7-9d56-f114abf1de44 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:11:58.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6054" for this suite. Jan 7 13:12:04.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:12:04.280: INFO: namespace downward-api-6054 deletion completed in 6.196079401s • [SLOW TEST:16.895 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:12:04.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-39392426-7312-476a-8c82-7d93299bd16f STEP: Creating a pod to test consume secrets Jan 7 13:12:04.451: INFO: Waiting up to 5m0s for pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5" in namespace "secrets-751" to be "success or failure" Jan 7 13:12:04.459: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32724ms Jan 7 13:12:06.472: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020493495s Jan 7 13:12:08.482: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030373112s Jan 7 13:12:10.495: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.043094398s Jan 7 13:12:12.512: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060163019s STEP: Saw pod success Jan 7 13:12:12.512: INFO: Pod "pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5" satisfied condition "success or failure" Jan 7 13:12:12.526: INFO: Trying to get logs from node iruya-node pod pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5 container secret-volume-test: STEP: delete the pod Jan 7 13:12:12.650: INFO: Waiting for pod pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5 to disappear Jan 7 13:12:12.656: INFO: Pod pod-secrets-506ddc3e-87cb-4865-991b-7596712e22a5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:12:12.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-751" for this suite. Jan 7 13:12:18.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:12:18.949: INFO: namespace secrets-751 deletion completed in 6.285693952s STEP: Destroying namespace "secret-namespace-1372" for this suite. Jan 7 13:12:24.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:12:25.105: INFO: namespace secret-namespace-1372 deletion completed in 6.155957662s • [SLOW TEST:20.824 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:12:25.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
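NOTE: The launch check just declared above is implemented as a polling loop: the test repeatedly reads the DaemonSet's status until status.NumberAvailable catches up with status.DesiredNumberScheduled (one daemon pod per schedulable node), which is what the repeated INFO lines that follow are printing. A minimal client-go sketch of the same loop; it is illustrative only, assuming a recent client-go (context-taking signatures; older releases omit the context argument), the kubeconfig path from this run, and the namespace/DaemonSet name visible later in this test's log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until every node that should run a daemon pod reports one available.
	for {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-7586").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("desired=%d available=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		if ds.Status.DesiredNumberScheduled > 0 && ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled {
			return
		}
		time.Sleep(time.Second)
	}
}
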
Jan 7 13:12:25.411: INFO: Number of nodes with available pods: 0 Jan 7 13:12:25.411: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:26.430: INFO: Number of nodes with available pods: 0 Jan 7 13:12:26.430: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:27.479: INFO: Number of nodes with available pods: 0 Jan 7 13:12:27.479: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:28.432: INFO: Number of nodes with available pods: 0 Jan 7 13:12:28.432: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:29.424: INFO: Number of nodes with available pods: 0 Jan 7 13:12:29.424: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:31.931: INFO: Number of nodes with available pods: 0 Jan 7 13:12:31.931: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:32.423: INFO: Number of nodes with available pods: 0 Jan 7 13:12:32.423: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:33.423: INFO: Number of nodes with available pods: 0 Jan 7 13:12:33.423: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:34.432: INFO: Number of nodes with available pods: 0 Jan 7 13:12:34.432: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:35.422: INFO: Number of nodes with available pods: 0 Jan 7 13:12:35.422: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:12:36.479: INFO: Number of nodes with available pods: 2 Jan 7 13:12:36.479: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 7 13:12:36.620: INFO: Number of nodes with available pods: 1 Jan 7 13:12:36.621: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:37.976: INFO: Number of nodes with available pods: 1 Jan 7 13:12:37.976: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:38.780: INFO: Number of nodes with available pods: 1 Jan 7 13:12:38.781: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:40.494: INFO: Number of nodes with available pods: 1 Jan 7 13:12:40.495: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:41.797: INFO: Number of nodes with available pods: 1 Jan 7 13:12:41.798: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:42.664: INFO: Number of nodes with available pods: 1 Jan 7 13:12:42.664: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:43.639: INFO: Number of nodes with available pods: 1 Jan 7 13:12:43.639: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:44.893: INFO: Number of nodes with available pods: 1 Jan 7 13:12:44.894: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:45.849: INFO: Number of nodes with available pods: 1 Jan 7 13:12:45.849: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:46.646: INFO: Number of nodes with available pods: 1 Jan 7 13:12:46.646: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 7 13:12:47.678: INFO: Number of nodes with available pods: 2 Jan 7 13:12:47.678: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
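NOTE: The "Number of nodes with available pods: 0" lines above are expected while the pods are still starting; the accompanying "Node iruya-node is running more than one daemon pod" text is the check's generic failure message and appears even when a node simply has no available daemon pod yet. The 'Failed'-phase step works because the DaemonSet controller deletes and replaces daemon pods that reach a terminal phase. A sketch of forcing a pod into that phase via the status subresource, reusing the clientset cs from the previous sketch (corev1 is k8s.io/api/core/v1; the pod name here is hypothetical):

	pod, err := cs.CoreV1().Pods("daemonsets-7586").Get(context.TODO(), "daemon-set-xxxxx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.Status.Phase = corev1.PodFailed // terminal phase; the controller deletes and recreates the pod
	if _, err := cs.CoreV1().Pods("daemonsets-7586").UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
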
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7586, will wait for the garbage collector to delete the pods Jan 7 13:12:47.763: INFO: Deleting DaemonSet.extensions daemon-set took: 12.20964ms Jan 7 13:12:48.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.879676ms Jan 7 13:13:06.607: INFO: Number of nodes with available pods: 0 Jan 7 13:13:06.607: INFO: Number of running nodes: 0, number of available pods: 0 Jan 7 13:13:06.617: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7586/daemonsets","resourceVersion":"19648115"},"items":null} Jan 7 13:13:06.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7586/pods","resourceVersion":"19648115"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:13:06.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7586" for this suite. Jan 7 13:13:12.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:13:12.768: INFO: namespace daemonsets-7586 deletion completed in 6.127628484s • [SLOW TEST:47.663 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:13:12.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-724038a2-4526-43ef-bbae-b07d03998b89 STEP: Creating a pod to test consume secrets Jan 7 13:13:12.887: INFO: Waiting up to 5m0s for pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7" in namespace "secrets-7015" to be "success or failure" Jan 7 13:13:12.896: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944892ms Jan 7 13:13:14.909: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021866911s Jan 7 13:13:16.919: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032181648s Jan 7 13:13:18.933: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.045940525s Jan 7 13:13:20.946: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058479038s STEP: Saw pod success Jan 7 13:13:20.946: INFO: Pod "pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7" satisfied condition "success or failure" Jan 7 13:13:20.951: INFO: Trying to get logs from node iruya-node pod pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7 container secret-env-test: STEP: delete the pod Jan 7 13:13:21.139: INFO: Waiting for pod pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7 to disappear Jan 7 13:13:21.163: INFO: Pod pod-secrets-bb932fd9-17c3-47c4-b514-f9d2c3fe7cb7 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:13:21.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7015" for this suite. Jan 7 13:13:27.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:13:27.400: INFO: namespace secrets-7015 deletion completed in 6.228061987s • [SLOW TEST:14.632 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:13:27.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-4ch8 STEP: Creating a pod to test atomic-volume-subpath Jan 7 13:13:27.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4ch8" in namespace "subpath-1343" to be "success or failure" Jan 7 13:13:27.686: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.901732ms Jan 7 13:13:29.721: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063028344s Jan 7 13:13:31.727: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068957198s Jan 7 13:13:33.737: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07887382s Jan 7 13:13:35.743: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085540172s Jan 7 13:13:37.753: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.095296572s Jan 7 13:13:39.798: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 12.140152329s Jan 7 13:13:41.808: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 14.149769369s Jan 7 13:13:43.817: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 16.1587984s Jan 7 13:13:45.831: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 18.172886862s Jan 7 13:13:47.839: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 20.18065361s Jan 7 13:13:49.873: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 22.214891628s Jan 7 13:13:51.887: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 24.228614558s Jan 7 13:13:53.904: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 26.246029659s Jan 7 13:13:55.916: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Running", Reason="", readiness=true. Elapsed: 28.25806315s Jan 7 13:13:57.924: INFO: Pod "pod-subpath-test-secret-4ch8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.265759661s STEP: Saw pod success Jan 7 13:13:57.924: INFO: Pod "pod-subpath-test-secret-4ch8" satisfied condition "success or failure" Jan 7 13:13:57.927: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-4ch8 container test-container-subpath-secret-4ch8: STEP: delete the pod Jan 7 13:13:57.984: INFO: Waiting for pod pod-subpath-test-secret-4ch8 to disappear Jan 7 13:13:57.998: INFO: Pod pod-subpath-test-secret-4ch8 no longer exists STEP: Deleting pod pod-subpath-test-secret-4ch8 Jan 7 13:13:57.999: INFO: Deleting pod "pod-subpath-test-secret-4ch8" in namespace "subpath-1343" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:13:58.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1343" for this suite. 
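NOTE: A subPath mount exposes a single entry of a volume at the mount point instead of the whole volume, which is what the atomic-writer test above exercises against a secret volume. A sketch of the relevant pod spec in Go, under the same assumptions as the earlier sketches; the secret name, key, and paths are hypothetical:

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "secret-vol",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}},
			}},
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /data/key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/data/key",
					SubPath:   "key", // mount only this key of the secret, not the whole volume
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
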
Jan 7 13:14:04.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:14:04.175: INFO: namespace subpath-1343 deletion completed in 6.141049001s • [SLOW TEST:36.774 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:14:04.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:14:04.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e" in namespace "projected-5934" to be "success or failure" Jan 7 13:14:04.367: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150274ms Jan 7 13:14:06.435: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076428672s Jan 7 13:14:08.450: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091991754s Jan 7 13:14:10.463: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104756155s Jan 7 13:14:12.480: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121690408s Jan 7 13:14:14.492: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.133734538s STEP: Saw pod success Jan 7 13:14:14.492: INFO: Pod "downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e" satisfied condition "success or failure" Jan 7 13:14:14.498: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e container client-container: STEP: delete the pod Jan 7 13:14:14.642: INFO: Waiting for pod downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e to disappear Jan 7 13:14:14.716: INFO: Pod downwardapi-volume-373a2d2d-0443-47b9-8656-695083871c7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:14:14.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5934" for this suite. Jan 7 13:14:20.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:14:20.942: INFO: namespace projected-5934 deletion completed in 6.218726522s • [SLOW TEST:16.767 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:14:20.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 7 13:14:21.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-797' Jan 7 13:14:22.847: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 7 13:14:22.847: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 7 13:14:22.923: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-gg4rh] Jan 7 13:14:22.924: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-gg4rh" in namespace "kubectl-797" to be "running and ready" Jan 7 13:14:22.937: INFO: Pod "e2e-test-nginx-rc-gg4rh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.302786ms Jan 7 13:14:24.951: INFO: Pod "e2e-test-nginx-rc-gg4rh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027291018s Jan 7 13:14:26.960: INFO: Pod "e2e-test-nginx-rc-gg4rh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036091319s Jan 7 13:14:28.970: INFO: Pod "e2e-test-nginx-rc-gg4rh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04647493s Jan 7 13:14:30.979: INFO: Pod "e2e-test-nginx-rc-gg4rh": Phase="Running", Reason="", readiness=true. Elapsed: 8.054817045s Jan 7 13:14:30.979: INFO: Pod "e2e-test-nginx-rc-gg4rh" satisfied condition "running and ready" Jan 7 13:14:30.979: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-gg4rh] Jan 7 13:14:30.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-797' Jan 7 13:14:31.200: INFO: stderr: "" Jan 7 13:14:31.200: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 7 13:14:31.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-797' Jan 7 13:14:31.645: INFO: stderr: "" Jan 7 13:14:31.646: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:14:31.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-797" for this suite. Jan 7 13:14:53.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:14:53.930: INFO: namespace kubectl-797 deletion completed in 22.254225784s • [SLOW TEST:32.987 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:14:53.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:15:02.537: INFO: Waiting up to 5m0s for pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4" in namespace "pods-54" to be "success or failure" Jan 7 13:15:02.593: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.206961ms Jan 7 13:15:04.691: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153402006s Jan 7 13:15:06.733: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196055857s Jan 7 13:15:08.762: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224869516s Jan 7 13:15:10.786: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.248469409s STEP: Saw pod success Jan 7 13:15:10.786: INFO: Pod "client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4" satisfied condition "success or failure" Jan 7 13:15:10.794: INFO: Trying to get logs from node iruya-node pod client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4 container env3cont: STEP: delete the pod Jan 7 13:15:10.989: INFO: Waiting for pod client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4 to disappear Jan 7 13:15:10.998: INFO: Pod client-envvars-91ead026-45fc-4f90-9991-fe26fc9670b4 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:15:10.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-54" for this suite. Jan 7 13:15:53.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:15:53.271: INFO: namespace pods-54 deletion completed in 42.265982606s • [SLOW TEST:59.340 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:15:53.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:16:19.457: INFO: Container started at 2020-01-07 13:16:00 +0000 UTC, pod became ready at 2020-01-07 13:16:18 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:16:19.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7566" for this suite. 
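NOTE: The single INFO line above carries the whole assertion for this test: the container started at 13:16:00 but the pod only became ready at 13:16:18, i.e. not before the probe's initial delay, and the test additionally requires that the container is never restarted, since a failing readiness probe only removes the pod from service endpoints (restarting is the liveness probe's job). A sketch of such a probe in Go; the values are illustrative, and the embedded handler struct is named ProbeHandler in recent API versions (older releases call it Handler):

	var container corev1.Container
	container.ReadinessProbe = &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
		},
		InitialDelaySeconds: 15, // the pod cannot report Ready before this delay
		PeriodSeconds:       5,
	}
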
Jan 7 13:16:43.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:16:43.607: INFO: namespace container-probe-7566 deletion completed in 24.143245858s • [SLOW TEST:50.336 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:16:43.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 7 13:16:45.105: INFO: Pod name wrapped-volume-race-5aa072f1-991c-4184-8412-411482dc1484: Found 0 pods out of 5 Jan 7 13:16:50.134: INFO: Pod name wrapped-volume-race-5aa072f1-991c-4184-8412-411482dc1484: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5aa072f1-991c-4184-8412-411482dc1484 in namespace emptydir-wrapper-8793, will wait for the garbage collector to delete the pods Jan 7 13:17:20.286: INFO: Deleting ReplicationController wrapped-volume-race-5aa072f1-991c-4184-8412-411482dc1484 took: 43.259493ms Jan 7 13:17:20.688: INFO: Terminating ReplicationController wrapped-volume-race-5aa072f1-991c-4184-8412-411482dc1484 pods took: 401.723339ms STEP: Creating RC which spawns configmap-volume pods Jan 7 13:18:07.036: INFO: Pod name wrapped-volume-race-37a49a07-7b64-4396-94eb-db64a98e54b7: Found 0 pods out of 5 Jan 7 13:18:12.051: INFO: Pod name wrapped-volume-race-37a49a07-7b64-4396-94eb-db64a98e54b7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-37a49a07-7b64-4396-94eb-db64a98e54b7 in namespace emptydir-wrapper-8793, will wait for the garbage collector to delete the pods Jan 7 13:18:46.226: INFO: Deleting ReplicationController wrapped-volume-race-37a49a07-7b64-4396-94eb-db64a98e54b7 took: 26.548925ms Jan 7 13:18:46.628: INFO: Terminating ReplicationController wrapped-volume-race-37a49a07-7b64-4396-94eb-db64a98e54b7 pods took: 401.360156ms STEP: Creating RC which spawns configmap-volume pods Jan 7 13:19:37.749: INFO: Pod name wrapped-volume-race-8e0fe60a-2a55-4822-ac55-5b48d7ff459b: Found 0 pods out of 5 Jan 7 13:19:42.764: INFO: Pod name wrapped-volume-race-8e0fe60a-2a55-4822-ac55-5b48d7ff459b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8e0fe60a-2a55-4822-ac55-5b48d7ff459b in namespace emptydir-wrapper-8793, will wait for the garbage collector to delete the pods Jan 7 13:20:16.902: INFO: Deleting 
ReplicationController wrapped-volume-race-8e0fe60a-2a55-4822-ac55-5b48d7ff459b took: 19.173452ms Jan 7 13:20:17.303: INFO: Terminating ReplicationController wrapped-volume-race-8e0fe60a-2a55-4822-ac55-5b48d7ff459b pods took: 400.900657ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:21:07.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8793" for this suite. Jan 7 13:21:17.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:21:17.768: INFO: namespace emptydir-wrapper-8793 deletion completed in 10.232345247s • [SLOW TEST:274.160 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:21:17.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 7 13:21:17.982: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3309,SelfLink:/api/v1/namespaces/watch-3309/configmaps/e2e-watch-test-resource-version,UID:93f2c720-5bd3-40fb-8954-5c502aa0ee17,ResourceVersion:19649822,Generation:0,CreationTimestamp:2020-01-07 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 7 13:21:17.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3309,SelfLink:/api/v1/namespaces/watch-3309/configmaps/e2e-watch-test-resource-version,UID:93f2c720-5bd3-40fb-8954-5c502aa0ee17,ResourceVersion:19649823,Generation:0,CreationTimestamp:2020-01-07 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:21:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3309" for this suite. Jan 7 13:21:24.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:21:24.115: INFO: namespace watch-3309 deletion completed in 6.129680712s • [SLOW TEST:6.347 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:21:24.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 7 13:21:38.323: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 7 13:21:48.541: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:21:48.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-374" for this suite. 
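NOTE: The grace-period test above deletes the pod with a non-zero grace period and then confirms, through the kubelet's proxied API, that the pod object disappears once termination completes. (The doubled "kubectl kubectl" in the proxy invocation above is the framework logging the command path followed by the full argv, whose first element repeats the binary name.) Programmatically the grace period is passed in DeleteOptions; a sketch under the same assumptions as the earlier sketches, with a hypothetical pod name:

	grace := int64(30) // seconds the kubelet waits between SIGTERM and SIGKILL
	err := cs.CoreV1().Pods("pods-374").Delete(context.TODO(), "my-pod", metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
	if err != nil {
		panic(err)
	}
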
Jan 7 13:21:54.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:21:54.762: INFO: namespace pods-374 deletion completed in 6.192172433s • [SLOW TEST:30.646 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:21:54.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jan 7 13:21:54.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 7 13:21:55.082: INFO: stderr: "" Jan 7 13:21:55.082: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:21:55.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9026" for this suite. 
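NOTE: kubectl api-versions is a thin wrapper over the discovery API, and the assertion above just scans its stdout for the core "v1" entry. The same check can be made against the discovery client directly; a sketch under the same assumptions as the earlier sketches:

	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the core/legacy API group
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
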
Jan 7 13:22:01.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:22:01.265: INFO: namespace kubectl-9026 deletion completed in 6.170105497s • [SLOW TEST:6.502 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:22:01.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0107 13:22:04.028197 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 7 13:22:04.028: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:22:04.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3314" for this suite. 
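NOTE: The transient "expected 0 rs, got 1 rs" lines above are the garbage collector at work: deleting the Deployment leaves its ReplicaSet and pods behind for a moment until the GC follows the ownerReferences chain and removes them. Clients steer this behavior through the deletion propagation policy; a sketch under the same assumptions as the earlier sketches, with a hypothetical deployment name:

	policy := metav1.DeletePropagationBackground // delete the owner first; GC removes dependents asynchronously
	// metav1.DeletePropagationForeground would instead block owner deletion on the dependents,
	// and metav1.DeletePropagationOrphan would leave the ReplicaSet behind.
	err := cs.AppsV1().Deployments("gc-3314").Delete(context.TODO(), "test-deployment", metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
	if err != nil {
		panic(err)
	}
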
Jan 7 13:22:10.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:22:10.980: INFO: namespace gc-3314 deletion completed in 6.948551747s • [SLOW TEST:9.715 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:22:10.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 7 13:22:11.030: INFO: Waiting up to 5m0s for pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91" in namespace "downward-api-5727" to be "success or failure" Jan 7 13:22:11.059: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91": Phase="Pending", Reason="", readiness=false. Elapsed: 29.284757ms Jan 7 13:22:13.067: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037616798s Jan 7 13:22:15.107: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076880478s Jan 7 13:22:17.116: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085803776s Jan 7 13:22:19.132: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101996765s STEP: Saw pod success Jan 7 13:22:19.132: INFO: Pod "downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91" satisfied condition "success or failure" Jan 7 13:22:19.143: INFO: Trying to get logs from node iruya-node pod downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91 container dapi-container: STEP: delete the pod Jan 7 13:22:19.373: INFO: Waiting for pod downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91 to disappear Jan 7 13:22:19.385: INFO: Pod downward-api-0b834291-0bea-4178-b0dc-34bf635b2f91 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:22:19.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5727" for this suite. 
Jan 7 13:22:25.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:22:25.598: INFO: namespace downward-api-5727 deletion completed in 6.182266802s • [SLOW TEST:14.617 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:22:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a447fca1-9e50-494f-8a8e-29f50c9a82b9 STEP: Creating a pod to test consume configMaps Jan 7 13:22:25.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a" in namespace "configmap-3570" to be "success or failure" Jan 7 13:22:25.933: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Pending", Reason="", readiness=false. Elapsed: 147.1448ms Jan 7 13:22:27.945: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159280476s Jan 7 13:22:29.952: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166649633s Jan 7 13:22:31.967: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180687022s Jan 7 13:22:33.978: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Running", Reason="", readiness=true. Elapsed: 8.192439079s Jan 7 13:22:35.987: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2014965s STEP: Saw pod success Jan 7 13:22:35.988: INFO: Pod "pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a" satisfied condition "success or failure" Jan 7 13:22:35.991: INFO: Trying to get logs from node iruya-node pod pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a container configmap-volume-test: STEP: delete the pod Jan 7 13:22:36.117: INFO: Waiting for pod pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a to disappear Jan 7 13:22:36.122: INFO: Pod pod-configmaps-af44b94f-51ac-4cc3-bfb3-2170a43f259a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:22:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3570" for this suite. 
Jan 7 13:22:42.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:22:42.754: INFO: namespace configmap-3570 deletion completed in 6.625702271s • [SLOW TEST:17.156 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:22:42.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:22:42.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067" in namespace "projected-3542" to be "success or failure" Jan 7 13:22:42.926: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119891ms Jan 7 13:22:44.936: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023649352s Jan 7 13:22:46.950: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037578047s Jan 7 13:22:48.962: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049425694s Jan 7 13:22:50.978: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06611802s STEP: Saw pod success Jan 7 13:22:50.979: INFO: Pod "downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067" satisfied condition "success or failure" Jan 7 13:22:50.983: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067 container client-container: STEP: delete the pod Jan 7 13:22:51.112: INFO: Waiting for pod downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067 to disappear Jan 7 13:22:51.118: INFO: Pod downwardapi-volume-b111fa46-76e7-4417-95e3-32d670910067 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:22:51.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3542" for this suite. 
Jan 7 13:22:57.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:22:57.293: INFO: namespace projected-3542 deletion completed in 6.16887776s • [SLOW TEST:14.537 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:22:57.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 7 13:22:57.508: INFO: Waiting up to 5m0s for pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca" in namespace "emptydir-7736" to be "success or failure" Jan 7 13:22:57.534: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca": Phase="Pending", Reason="", readiness=false. Elapsed: 24.912545ms Jan 7 13:22:59.546: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037136165s Jan 7 13:23:01.629: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12047908s Jan 7 13:23:03.639: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130368025s Jan 7 13:23:05.651: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142604047s STEP: Saw pod success Jan 7 13:23:05.652: INFO: Pod "pod-dd459e6e-aef8-439f-aee6-06d427b54dca" satisfied condition "success or failure" Jan 7 13:23:05.658: INFO: Trying to get logs from node iruya-node pod pod-dd459e6e-aef8-439f-aee6-06d427b54dca container test-container: STEP: delete the pod Jan 7 13:23:05.740: INFO: Waiting for pod pod-dd459e6e-aef8-439f-aee6-06d427b54dca to disappear Jan 7 13:23:05.754: INFO: Pod pod-dd459e6e-aef8-439f-aee6-06d427b54dca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:23:05.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7736" for this suite. 
Jan 7 13:23:11.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:23:12.104: INFO: namespace emptydir-7736 deletion completed in 6.330389603s • [SLOW TEST:14.810 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:23:12.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jan 7 13:23:12.208: INFO: Waiting up to 5m0s for pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473" in namespace "containers-3594" to be "success or failure" Jan 7 13:23:12.226: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473": Phase="Pending", Reason="", readiness=false. Elapsed: 17.375991ms Jan 7 13:23:14.239: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030754791s Jan 7 13:23:16.246: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037817374s Jan 7 13:23:18.255: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046952788s Jan 7 13:23:20.262: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053376952s STEP: Saw pod success Jan 7 13:23:20.262: INFO: Pod "client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473" satisfied condition "success or failure" Jan 7 13:23:20.266: INFO: Trying to get logs from node iruya-node pod client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473 container test-container: STEP: delete the pod Jan 7 13:23:20.346: INFO: Waiting for pod client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473 to disappear Jan 7 13:23:20.360: INFO: Pod client-containers-c6b9c2c6-b965-4b58-a903-60a8944ef473 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:23:20.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3594" for this suite. 
Jan 7 13:23:26.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:23:26.553: INFO: namespace containers-3594 deletion completed in 6.186422462s • [SLOW TEST:14.450 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:23:26.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 7 13:23:26.765: INFO: Waiting up to 5m0s for pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435" in namespace "emptydir-3132" to be "success or failure" Jan 7 13:23:26.792: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435": Phase="Pending", Reason="", readiness=false. Elapsed: 26.936313ms Jan 7 13:23:28.803: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037360922s Jan 7 13:23:30.909: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143074998s Jan 7 13:23:32.917: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151829009s Jan 7 13:23:34.932: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.166096202s STEP: Saw pod success Jan 7 13:23:34.932: INFO: Pod "pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435" satisfied condition "success or failure" Jan 7 13:23:34.937: INFO: Trying to get logs from node iruya-node pod pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435 container test-container: STEP: delete the pod Jan 7 13:23:35.081: INFO: Waiting for pod pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435 to disappear Jan 7 13:23:35.091: INFO: Pod pod-ccffdf6e-3a16-4633-a63c-0ab7a6a97435 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:23:35.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3132" for this suite. 
Jan 7 13:23:41.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:23:41.241: INFO: namespace emptydir-3132 deletion completed in 6.144596177s • [SLOW TEST:14.687 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:23:41.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 7 13:23:41.337: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 7 13:23:41.349: INFO: Waiting for terminating namespaces to be deleted... Jan 7 13:23:41.353: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 7 13:23:41.367: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.367: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:23:41.367: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 7 13:23:41.367: INFO: Container weave ready: true, restart count 0 Jan 7 13:23:41.367: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:23:41.367: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 7 13:23:41.383: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container kube-scheduler ready: true, restart count 12 Jan 7 13:23:41.383: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container coredns ready: true, restart count 0 Jan 7 13:23:41.383: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container etcd ready: true, restart count 0 Jan 7 13:23:41.383: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 7 13:23:41.383: INFO: Container weave ready: true, restart count 0 Jan 7 13:23:41.383: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:23:41.383: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container coredns ready: true, restart count 0 Jan 7 13:23:41.383: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 7 
13:23:41.383: INFO: Container kube-controller-manager ready: true, restart count 18 Jan 7 13:23:41.383: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:23:41.383: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 7 13:23:41.383: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e79d8c90af4338], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:23:42.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7740" for this suite. Jan 7 13:23:48.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:23:48.629: INFO: namespace sched-pred-7740 deletion completed in 6.145216208s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:23:48.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Jan 7 13:23:56.789: INFO: Pod pod-hostip-22699f82-3e81-4eaa-82f9-b187adcef5c2 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:23:56.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3458" for this suite. 
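The host-IP assertion above can be reproduced with a throwaway pod and a jsonpath query (pod name illustrative):

    kubectl run hostip-demo --image=busybox:1.31 --restart=Never -- sleep 600
    kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'   # the node's
                                                                 # address, 10.96.3.65 above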
Jan 7 13:24:18.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:24:18.953: INFO: namespace pods-3458 deletion completed in 22.156716288s • [SLOW TEST:30.324 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:24:18.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 7 13:24:19.112: INFO: Waiting up to 5m0s for pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033" in namespace "downward-api-7941" to be "success or failure" Jan 7 13:24:19.120: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Pending", Reason="", readiness=false. Elapsed: 7.694118ms Jan 7 13:24:21.134: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022363014s Jan 7 13:24:23.148: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036028875s Jan 7 13:24:25.156: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043632154s Jan 7 13:24:27.164: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052103725s Jan 7 13:24:29.177: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065102569s STEP: Saw pod success Jan 7 13:24:29.177: INFO: Pod "downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033" satisfied condition "success or failure" Jan 7 13:24:29.183: INFO: Trying to get logs from node iruya-node pod downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033 container dapi-container: STEP: delete the pod Jan 7 13:24:29.318: INFO: Waiting for pod downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033 to disappear Jan 7 13:24:29.336: INFO: Pod downward-api-b49db2b2-7aa1-49bf-a956-89bf59fbb033 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:24:29.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7941" for this suite. 
Jan 7 13:24:35.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:24:35.520: INFO: namespace downward-api-7941 deletion completed in 6.175796024s • [SLOW TEST:16.565 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:24:35.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a01c5843-3ba6-40ca-8b61-e552c63b2c9d in namespace container-probe-2475 Jan 7 13:24:45.652: INFO: Started pod busybox-a01c5843-3ba6-40ca-8b61-e552c63b2c9d in namespace container-probe-2475 STEP: checking the pod's current state and verifying that restartCount is present Jan 7 13:24:45.657: INFO: Initial restart count of pod busybox-a01c5843-3ba6-40ca-8b61-e552c63b2c9d is 0 Jan 7 13:25:36.504: INFO: Restart count of pod container-probe-2475/busybox-a01c5843-3ba6-40ca-8b61-e552c63b2c9d is now 1 (50.846221259s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:25:36.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2475" for this suite. 
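The liveness scenario above (probe passes while /tmp/health exists, then fails and forces a restart) corresponds to this kind of manifest; names are illustrative, the probe command is the test's own:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo     # illustrative name
    spec:
      containers:
      - name: liveness
        image: busybox:1.31
        args: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    kubectl get pod liveness-exec-demo   # RESTARTS climbs once the file is gone,
                                         # mirroring the ~50s restart logged above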
Jan 7 13:25:42.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:25:42.712: INFO: namespace container-probe-2475 deletion completed in 6.159711947s • [SLOW TEST:67.191 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:25:42.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 7 13:25:42.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9146' Jan 7 13:25:45.026: INFO: stderr: "" Jan 7 13:25:45.026: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 7 13:25:45.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146' Jan 7 13:25:45.236: INFO: stderr: "" Jan 7 13:25:45.237: INFO: stdout: "update-demo-nautilus-47hlm update-demo-nautilus-hvspq " Jan 7 13:25:45.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47hlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:45.387: INFO: stderr: "" Jan 7 13:25:45.387: INFO: stdout: "" Jan 7 13:25:45.387: INFO: update-demo-nautilus-47hlm is created but not running Jan 7 13:25:50.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146' Jan 7 13:25:50.610: INFO: stderr: "" Jan 7 13:25:50.610: INFO: stdout: "update-demo-nautilus-47hlm update-demo-nautilus-hvspq " Jan 7 13:25:50.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47hlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:50.719: INFO: stderr: "" Jan 7 13:25:50.719: INFO: stdout: "" Jan 7 13:25:50.719: INFO: update-demo-nautilus-47hlm is created but not running Jan 7 13:25:55.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146' Jan 7 13:25:55.943: INFO: stderr: "" Jan 7 13:25:55.944: INFO: stdout: "update-demo-nautilus-47hlm update-demo-nautilus-hvspq " Jan 7 13:25:55.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47hlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:56.039: INFO: stderr: "" Jan 7 13:25:56.039: INFO: stdout: "true" Jan 7 13:25:56.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47hlm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:56.190: INFO: stderr: "" Jan 7 13:25:56.190: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 7 13:25:56.190: INFO: validating pod update-demo-nautilus-47hlm Jan 7 13:25:56.204: INFO: got data: { "image": "nautilus.jpg" } Jan 7 13:25:56.204: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 7 13:25:56.204: INFO: update-demo-nautilus-47hlm is verified up and running Jan 7 13:25:56.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvspq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:56.287: INFO: stderr: "" Jan 7 13:25:56.287: INFO: stdout: "true" Jan 7 13:25:56.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvspq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:25:56.418: INFO: stderr: "" Jan 7 13:25:56.418: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 7 13:25:56.418: INFO: validating pod update-demo-nautilus-hvspq Jan 7 13:25:56.444: INFO: got data: { "image": "nautilus.jpg" } Jan 7 13:25:56.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 7 13:25:56.445: INFO: update-demo-nautilus-hvspq is verified up and running STEP: rolling-update to new replication controller Jan 7 13:25:56.448: INFO: scanned /root for discovery docs: Jan 7 13:25:56.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9146' Jan 7 13:26:28.048: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 7 13:26:28.049: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 7 13:26:28.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9146' Jan 7 13:26:28.224: INFO: stderr: "" Jan 7 13:26:28.224: INFO: stdout: "update-demo-kitten-7g66c update-demo-kitten-9lm45 " Jan 7 13:26:28.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7g66c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:26:28.334: INFO: stderr: "" Jan 7 13:26:28.334: INFO: stdout: "true" Jan 7 13:26:28.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7g66c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:26:28.471: INFO: stderr: "" Jan 7 13:26:28.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 7 13:26:28.471: INFO: validating pod update-demo-kitten-7g66c Jan 7 13:26:28.488: INFO: got data: { "image": "kitten.jpg" } Jan 7 13:26:28.488: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 7 13:26:28.488: INFO: update-demo-kitten-7g66c is verified up and running Jan 7 13:26:28.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9lm45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:26:28.601: INFO: stderr: "" Jan 7 13:26:28.602: INFO: stdout: "true" Jan 7 13:26:28.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9lm45 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9146' Jan 7 13:26:28.762: INFO: stderr: "" Jan 7 13:26:28.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 7 13:26:28.763: INFO: validating pod update-demo-kitten-9lm45 Jan 7 13:26:28.809: INFO: got data: { "image": "kitten.jpg" } Jan 7 13:26:28.809: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 7 13:26:28.809: INFO: update-demo-kitten-9lm45 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:26:28.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9146" for this suite. Jan 7 13:26:52.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:26:52.982: INFO: namespace kubectl-9146 deletion completed in 24.16596688s • [SLOW TEST:70.270 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:26:52.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 7 13:26:53.047: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 7 13:26:53.061: INFO: Waiting for terminating namespaces to be deleted... 
Jan 7 13:26:53.064: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 7 13:26:53.117: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.117: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:26:53.117: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 7 13:26:53.117: INFO: Container weave ready: true, restart count 0 Jan 7 13:26:53.117: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:26:53.118: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 7 13:26:53.127: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container kube-scheduler ready: true, restart count 12 Jan 7 13:26:53.127: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container coredns ready: true, restart count 0 Jan 7 13:26:53.127: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container etcd ready: true, restart count 0 Jan 7 13:26:53.127: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 7 13:26:53.127: INFO: Container weave ready: true, restart count 0 Jan 7 13:26:53.127: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:26:53.127: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container coredns ready: true, restart count 0 Jan 7 13:26:53.127: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container kube-controller-manager ready: true, restart count 18 Jan 7 13:26:53.127: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:26:53.127: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 7 13:26:53.127: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f83406f9-ff50-4b79-a48a-3978ab1c233a 42 STEP: Trying to relaunch the pod, now with labels. 
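The relaunch step here pairs a freshly labelled node with a matching nodeSelector; roughly the following, with an illustrative label key, the value 42 and node name taken from this run:

    kubectl label node iruya-node example.com/e2e-demo=42
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-labels-demo       # illustrative name
    spec:
      nodeSelector:
        example.com/e2e-demo: "42"
      containers:
      - name: app
        image: busybox:1.31
        args: ["sleep", "600"]
    EOF
    # a selector no node carries leaves the pod Pending with a FailedScheduling
    # event, as in the earlier SchedulerPredicates test in this run
    kubectl label node iruya-node example.com/e2e-demo-    # remove the label again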
STEP: removing the label kubernetes.io/e2e-f83406f9-ff50-4b79-a48a-3978ab1c233a off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-f83406f9-ff50-4b79-a48a-3978ab1c233a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:27:11.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9868" for this suite. Jan 7 13:27:29.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:27:29.542: INFO: namespace sched-pred-9868 deletion completed in 18.179734944s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:36.559 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:27:29.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 7 13:27:29.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7213' Jan 7 13:27:30.033: INFO: stderr: "" Jan 7 13:27:30.033: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 7 13:27:31.044: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:31.044: INFO: Found 0 / 1 Jan 7 13:27:32.046: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:32.046: INFO: Found 0 / 1 Jan 7 13:27:33.043: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:33.043: INFO: Found 0 / 1 Jan 7 13:27:34.051: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:34.051: INFO: Found 0 / 1 Jan 7 13:27:35.046: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:35.046: INFO: Found 0 / 1 Jan 7 13:27:36.047: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:36.047: INFO: Found 0 / 1 Jan 7 13:27:37.048: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:37.049: INFO: Found 0 / 1 Jan 7 13:27:38.043: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:38.043: INFO: Found 0 / 1 Jan 7 13:27:39.044: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:39.044: INFO: Found 1 / 1 Jan 7 13:27:39.044: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Jan 7 13:27:39.051: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:39.051: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 7 13:27:39.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kqpq6 --namespace=kubectl-7213 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 7 13:27:39.288: INFO: stderr: "" Jan 7 13:27:39.288: INFO: stdout: "pod/redis-master-kqpq6 patched\n" STEP: checking annotations Jan 7 13:27:39.312: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:27:39.312: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:27:39.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7213" for this suite. Jan 7 13:28:01.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:28:01.457: INFO: namespace kubectl-7213 deletion completed in 22.140178063s • [SLOW TEST:31.914 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:28:01.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 7 13:28:01.532: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:28:26.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3440" for this suite. 
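The submit-and-remove flow above, driven by hand; the delete puts the pod into Terminating for up to the grace period before the kubelet finishes it off (pod name illustrative):

    kubectl run grace-demo --image=busybox:1.31 --restart=Never -- sleep 600
    kubectl delete pod grace-demo --grace-period=30   # pod turns Terminating first
    kubectl get pods -w                               # (in a second shell) watch the
                                                      # termination notice, then removal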
Jan 7 13:28:32.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:28:32.752: INFO: namespace pods-3440 deletion completed in 6.140204125s • [SLOW TEST:31.294 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:28:32.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:29:32.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5384" for this suite. 
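A readiness probe that always fails gates the pod out of service endpoints but, unlike a liveness probe, never restarts it; the fixture looks roughly like this (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready-demo       # illustrative name
    spec:
      containers:
      - name: app
        image: busybox:1.31
        args: ["sleep", "600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]
          periodSeconds: 5
    EOF
    kubectl get pod never-ready-demo   # stays READY 0/1 with RESTARTS 0, which is
                                       # exactly what the test asserts for a minute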
Jan 7 13:29:54.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:29:55.064: INFO: namespace container-probe-5384 deletion completed in 22.201337356s • [SLOW TEST:82.312 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:29:55.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-152e327f-25d5-4841-9fec-0f19e112a914 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:29:55.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1112" for this suite. Jan 7 13:30:01.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:30:01.392: INFO: namespace secrets-1112 deletion completed in 6.175670888s • [SLOW TEST:6.327 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:30:01.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jan 7 13:30:01.561: INFO: Waiting up to 5m0s for pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90" in namespace "var-expansion-4241" to be "success or failure" Jan 7 13:30:01.587: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.607325ms Jan 7 13:30:03.599: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037837244s Jan 7 13:30:05.609: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048312101s Jan 7 13:30:07.619: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058072405s Jan 7 13:30:09.628: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066598321s Jan 7 13:30:11.638: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07653713s STEP: Saw pod success Jan 7 13:30:11.638: INFO: Pod "var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90" satisfied condition "success or failure" Jan 7 13:30:11.643: INFO: Trying to get logs from node iruya-node pod var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90 container dapi-container: STEP: delete the pod Jan 7 13:30:11.845: INFO: Waiting for pod var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90 to disappear Jan 7 13:30:11.851: INFO: Pod var-expansion-4cf57b7d-a9a7-4a99-a366-171b9a216b90 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:30:11.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4241" for this suite. Jan 7 13:30:17.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:30:18.118: INFO: namespace var-expansion-4241 deletion completed in 6.215640548s • [SLOW TEST:16.726 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:30:18.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ddb91410-6167-4d7f-8a1f-fdaa08afd66a STEP: Creating a pod to test consume secrets Jan 7 13:30:18.262: INFO: Waiting up to 5m0s for pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35" in namespace "secrets-9575" to be "success or failure" Jan 7 13:30:18.279: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.800645ms Jan 7 13:30:20.290: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028035444s Jan 7 13:30:22.306: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04458216s Jan 7 13:30:24.624: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362443952s Jan 7 13:30:26.637: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.375267428s STEP: Saw pod success Jan 7 13:30:26.637: INFO: Pod "pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35" satisfied condition "success or failure" Jan 7 13:30:26.655: INFO: Trying to get logs from node iruya-node pod pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35 container secret-volume-test: STEP: delete the pod Jan 7 13:30:26.774: INFO: Waiting for pod pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35 to disappear Jan 7 13:30:26.777: INFO: Pod pod-secrets-e2179b7d-7233-4799-85fe-e8f7691a1a35 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:30:26.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9575" for this suite. Jan 7 13:30:32.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:30:32.926: INFO: namespace secrets-9575 deletion completed in 6.14427141s • [SLOW TEST:14.806 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:30:32.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 7 13:30:47.100: INFO: File wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-853ba023-2f11-4f17-99c9-f1d153da6591 contains '' instead of 'foo.example.com.' 
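The empty '' results above are expected on a first poll: the prober pods write their dig output once per second for 30 seconds (see the loops above), and the checker most likely read the result files before the first answer landed; the retry round at 13:30:52 succeeds. The "test externalName service" being probed is equivalent to a manifest like this (name, namespace, and target host taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4901
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# which is exactly what the dig loops assert:
#   dig +short dns-test-service-3.dns-4901.svc.cluster.local CNAME
#   foo.example.com.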
Jan 7 13:30:47.110: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-853ba023-2f11-4f17-99c9-f1d153da6591 contains '' instead of 'foo.example.com.' Jan 7 13:30:47.110: INFO: Lookups using dns-4901/dns-test-853ba023-2f11-4f17-99c9-f1d153da6591 failed for: [wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:30:52.134: INFO: DNS probes using dns-test-853ba023-2f11-4f17-99c9-f1d153da6591 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 7 13:31:06.413: INFO: File wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains '' instead of 'bar.example.com.' Jan 7 13:31:06.422: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains '' instead of 'bar.example.com.' Jan 7 13:31:06.422: INFO: Lookups using dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e failed for: [wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:31:11.445: INFO: File wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 7 13:31:11.454: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 7 13:31:11.454: INFO: Lookups using dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e failed for: [wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:31:16.444: INFO: File wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 7 13:31:16.451: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 7 13:31:16.451: INFO: Lookups using dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e failed for: [wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:31:21.475: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e contains '' instead of 'bar.example.com.' 
Jan 7 13:31:21.475: INFO: Lookups using dns-4901/dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e failed for: [jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:31:26.450: INFO: DNS probes using dns-test-407569fd-f2f2-4866-83ac-6ae0f36b9c6e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4901.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 7 13:31:42.742: INFO: File wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-80378b5b-64c1-4f8e-8ae4-ff0cc850b0bb contains '' instead of '10.100.162.41' Jan 7 13:31:42.752: INFO: File jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local from pod dns-4901/dns-test-80378b5b-64c1-4f8e-8ae4-ff0cc850b0bb contains '' instead of '10.100.162.41' Jan 7 13:31:42.752: INFO: Lookups using dns-4901/dns-test-80378b5b-64c1-4f8e-8ae4-ff0cc850b0bb failed for: [wheezy_udp@dns-test-service-3.dns-4901.svc.cluster.local jessie_udp@dns-test-service-3.dns-4901.svc.cluster.local] Jan 7 13:31:47.777: INFO: DNS probes using dns-test-80378b5b-64c1-4f8e-8ae4-ff0cc850b0bb succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:31:48.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4901" for this suite. 
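The second and third phases of this test can be reproduced by patching the same Service. A sketch (default strategic-merge patch; note the final phase also has to drop externalName and give the Service ports when switching to type: ClusterIP, and the 10.100.162.41 A record seen above is simply whatever ClusterIP this cluster assigned):

kubectl -n dns-4901 patch service dns-test-service-3 \
  -p '{"spec":{"externalName":"bar.example.com"}}'

The stale 'foo.example.com. ' answers at 13:31:11 and 13:31:16 are ordinary DNS caching: the CNAME kept resolving to the old target until the record propagated, after which the 13:31:26 probe round succeeded.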
Jan 7 13:31:54.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:31:54.372: INFO: namespace dns-4901 deletion completed in 6.228252291s • [SLOW TEST:81.446 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:31:54.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jan 7 13:31:54.469: INFO: Waiting up to 5m0s for pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b" in namespace "var-expansion-9658" to be "success or failure" Jan 7 13:31:54.478: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662056ms Jan 7 13:31:56.497: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027436842s Jan 7 13:31:58.517: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047387833s Jan 7 13:32:00.530: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06051305s Jan 7 13:32:02.545: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075491741s STEP: Saw pod success Jan 7 13:32:02.545: INFO: Pod "var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b" satisfied condition "success or failure" Jan 7 13:32:02.551: INFO: Trying to get logs from node iruya-node pod var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b container dapi-container: STEP: delete the pod Jan 7 13:32:02.633: INFO: Waiting for pod var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b to disappear Jan 7 13:32:02.639: INFO: Pod var-expansion-9d96530a-c5a3-4112-8f2c-3705524bd97b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:32:02.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9658" for this suite. 
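Both Variable Expansion cases in this run (substituting values in a container's command at 13:30:01, and composing env vars into new env vars here) exercise the same $(VAR) expansion the kubelet applies to command, args, and env values before the container starts. A minimal sketch with illustrative names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $FOO_COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    - name: FOO_COMPOSED
      value: prefix-$(FOO)      # expanded by the kubelet, not the shell
EOF
kubectl logs var-expansion-demo   # prints: prefix-foo-value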
Jan 7 13:32:08.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:32:08.854: INFO: namespace var-expansion-9658 deletion completed in 6.209583482s • [SLOW TEST:14.482 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:32:08.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jan 7 13:32:08.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 7 13:32:09.157: INFO: stderr: "" Jan 7 13:32:09.157: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:32:09.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1404" for this suite. 
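On the cluster-info stdout captured above: the \x1b[0;32m ... \x1b[0m sequences are just ANSI color escapes around the service names and URLs. Stripped of color, the validated output is simply:

kubectl --kubeconfig=/root/.kube/config cluster-info
# Kubernetes master is running at https://172.24.4.57:6443
# KubeDNS is running at https://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.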
Jan 7 13:32:15.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:32:15.379: INFO: namespace kubectl-1404 deletion completed in 6.215249934s • [SLOW TEST:6.522 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:32:15.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:32:15.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-737" for this suite. 
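This Services case essentially asserts that the built-in 'kubernetes' Service in the default namespace exists and exposes the API server over a secure (443/TCP) port; there is nothing to deploy, which is why the spec body is empty between [It] and [AfterEach]. Checked by hand it would look roughly like this (the ClusterIP and AGE shown are illustrative and cluster-specific):

kubectl get service kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   300d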
Jan 7 13:32:21.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:32:21.699: INFO: namespace services-737 deletion completed in 6.195936418s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.319 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:32:21.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:32:21.867: INFO: Create a RollingUpdate DaemonSet Jan 7 13:32:21.883: INFO: Check that daemon pods launch on every node of the cluster Jan 7 13:32:21.897: INFO: Number of nodes with available pods: 0 Jan 7 13:32:21.897: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:23.134: INFO: Number of nodes with available pods: 0 Jan 7 13:32:23.135: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:23.932: INFO: Number of nodes with available pods: 0 Jan 7 13:32:23.932: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:25.018: INFO: Number of nodes with available pods: 0 Jan 7 13:32:25.018: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:25.917: INFO: Number of nodes with available pods: 0 Jan 7 13:32:25.917: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:27.437: INFO: Number of nodes with available pods: 0 Jan 7 13:32:27.438: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:27.921: INFO: Number of nodes with available pods: 0 Jan 7 13:32:27.921: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:29.001: INFO: Number of nodes with available pods: 0 Jan 7 13:32:29.001: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:29.913: INFO: Number of nodes with available pods: 0 Jan 7 13:32:29.914: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:31.006: INFO: Number of nodes with available pods: 1 Jan 7 13:32:31.006: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:31.916: INFO: Number of nodes with available pods: 1 Jan 7 13:32:31.916: INFO: Node iruya-node is running more than one daemon pod Jan 7 13:32:32.908: INFO: Number of nodes with available pods: 2 Jan 7 13:32:32.908: INFO: Number of running nodes: 2, number of available pods: 2 Jan 7 13:32:32.908: INFO: Update the DaemonSet to trigger a rollout Jan 7 13:32:32.917: INFO: Updating DaemonSet 
daemon-set Jan 7 13:32:46.945: INFO: Roll back the DaemonSet before rollout is complete Jan 7 13:32:46.955: INFO: Updating DaemonSet daemon-set Jan 7 13:32:46.956: INFO: Make sure DaemonSet rollback is complete Jan 7 13:32:46.966: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:46.966: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:47.983: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:47.983: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:48.986: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:48.986: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:49.990: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:49.990: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:50.987: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:50.987: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:52.004: INFO: Wrong image for pod: daemon-set-6xfh4. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 7 13:32:52.004: INFO: Pod daemon-set-6xfh4 is not available Jan 7 13:32:52.992: INFO: Pod daemon-set-48tbh is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8546, will wait for the garbage collector to delete the pods Jan 7 13:32:53.116: INFO: Deleting DaemonSet.extensions daemon-set took: 52.531447ms Jan 7 13:32:53.417: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.12638ms Jan 7 13:33:07.925: INFO: Number of nodes with available pods: 0 Jan 7 13:33:07.925: INFO: Number of running nodes: 0, number of available pods: 0 Jan 7 13:33:07.929: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8546/daemonsets","resourceVersion":"19651626"},"items":null} Jan 7 13:33:07.937: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8546/pods","resourceVersion":"19651627"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:33:07.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8546" for this suite. 
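Replayed by hand against the objects this test created, the rollback sequence is roughly the following (the container name inside the DaemonSet is not shown in this log, so 'app' below is an assumption):

kubectl -n daemonsets-8546 set image daemonset/daemon-set app=foo:non-existent
kubectl -n daemonsets-8546 rollout undo daemonset/daemon-set
kubectl -n daemonsets-8546 rollout status daemonset/daemon-set

Because the foo:non-existent pod (daemon-set-6xfh4) never became available, the rollback only needed to replace that single pod (its successor daemon-set-48tbh appears at 13:32:52.992); the healthy pod on the other node was left untouched, which is precisely the "without unnecessary restarts" assertion.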
Jan 7 13:33:13.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:33:14.182: INFO: namespace daemonsets-8546 deletion completed in 6.222648331s • [SLOW TEST:52.481 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:33:14.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:33:14.251: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 7 13:33:14.367: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 7 13:33:19.385: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 7 13:33:25.402: INFO: Creating deployment "test-rolling-update-deployment" Jan 7 13:33:25.415: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 7 13:33:25.446: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 7 13:33:27.462: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 7 13:33:27.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:33:29.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:33:31.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:33:33.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714000805, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:33:35.477: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 7 13:33:35.493: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/deployments/test-rolling-update-deployment,UID:71aa8120-4a76-4b8c-9920-a0daa38887eb,ResourceVersion:19651732,Generation:1,CreationTimestamp:2020-01-07 13:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-07 13:33:25 +0000 UTC 2020-01-07 13:33:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-07 13:33:33 +0000 UTC 2020-01-07 13:33:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 7 13:33:35.498: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:eed6a44b-7a91-4d17-ab1a-dcc3d5206c2a,ResourceVersion:19651721,Generation:1,CreationTimestamp:2020-01-07 13:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 71aa8120-4a76-4b8c-9920-a0daa38887eb 0xc0027efc47
0xc0027efc48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 7 13:33:35.498: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 7 13:33:35.498: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/replicasets/test-rolling-update-controller,UID:c55bed11-4689-44f3-8173-a790e1312431,ResourceVersion:19651730,Generation:2,CreationTimestamp:2020-01-07 13:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 71aa8120-4a76-4b8c-9920-a0daa38887eb 0xc0027efb5f 0xc0027efb70}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 13:33:35.504: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-z45jz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-z45jz,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9885,SelfLink:/api/v1/namespaces/deployment-9885/pods/test-rolling-update-deployment-79f6b9d75c-z45jz,UID:a2c293bd-cd36-420e-acf2-4f4012e988d9,ResourceVersion:19651720,Generation:0,CreationTimestamp:2020-01-07 13:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c eed6a44b-7a91-4d17-ab1a-dcc3d5206c2a 0xc002b26517 0xc002b26518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4spg7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4spg7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4spg7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b26590} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b265b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:33:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:33:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:33:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:33:25 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-07 13:33:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-07 13:33:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://25d2ad0c60eb223a1977f8fb458db18e038b775cc6bc82b8d7a1ffbfd4fa4587}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:33:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9885" for this suite. Jan 7 13:33:41.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:33:41.652: INFO: namespace deployment-9885 deletion completed in 6.139169805s • [SLOW TEST:27.469 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:33:41.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-v24m STEP: Creating a pod to test atomic-volume-subpath Jan 7 13:33:41.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v24m" in namespace "subpath-8559" to be "success or failure" Jan 7 13:33:41.978: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Pending", Reason="", readiness=false. Elapsed: 21.881121ms Jan 7 13:33:44.014: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058540777s Jan 7 13:33:46.023: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.067801497s Jan 7 13:33:48.029: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073231528s Jan 7 13:33:50.071: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115393229s Jan 7 13:33:52.080: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 10.124482646s Jan 7 13:33:54.088: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 12.132362625s Jan 7 13:33:56.098: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 14.142236897s Jan 7 13:33:58.109: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 16.15309235s Jan 7 13:34:00.119: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 18.163103655s Jan 7 13:34:02.134: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 20.178077168s Jan 7 13:34:04.146: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 22.18987111s Jan 7 13:34:06.159: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 24.203775992s Jan 7 13:34:08.169: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 26.213001558s Jan 7 13:34:10.178: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 28.222765597s Jan 7 13:34:12.187: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Running", Reason="", readiness=true. Elapsed: 30.231759683s Jan 7 13:34:14.202: INFO: Pod "pod-subpath-test-downwardapi-v24m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.246386135s STEP: Saw pod success Jan 7 13:34:14.202: INFO: Pod "pod-subpath-test-downwardapi-v24m" satisfied condition "success or failure" Jan 7 13:34:14.206: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-v24m container test-container-subpath-downwardapi-v24m: STEP: delete the pod Jan 7 13:34:14.426: INFO: Waiting for pod pod-subpath-test-downwardapi-v24m to disappear Jan 7 13:34:14.448: INFO: Pod pod-subpath-test-downwardapi-v24m no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-v24m Jan 7 13:34:14.448: INFO: Deleting pod "pod-subpath-test-downwardapi-v24m" in namespace "subpath-8559" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:34:14.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8559" for this suite. 
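The pod under test here reads a downwardAPI volume through a volumeMount subPath; subPath mounts of atomic-writer volumes (configMap, secret, downwardAPI, projected) take a special code path, which is why conformance exercises them separately. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podname
      subPath: podname          # mounts a single file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-demo       # prints: subpath-demo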
Jan 7 13:34:20.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:34:20.669: INFO: namespace subpath-8559 deletion completed in 6.207061754s • [SLOW TEST:39.017 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:34:20.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-7636 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7636 to expose endpoints map[] Jan 7 13:34:20.850: INFO: Get endpoints failed (17.740764ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 7 13:34:21.862: INFO: successfully validated that service endpoint-test2 in namespace services-7636 exposes endpoints map[] (1.029859657s elapsed) STEP: Creating pod pod1 in namespace services-7636 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7636 to expose endpoints map[pod1:[80]] Jan 7 13:34:25.973: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.084807254s elapsed, will retry) Jan 7 13:34:30.019: INFO: successfully validated that service endpoint-test2 in namespace services-7636 exposes endpoints map[pod1:[80]] (8.130541286s elapsed) STEP: Creating pod pod2 in namespace services-7636 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7636 to expose endpoints map[pod1:[80] pod2:[80]] Jan 7 13:34:34.305: INFO: Unexpected endpoints: found map[5a37978f-e7f7-4a31-a941-8fb73163b787:[80]], expected map[pod1:[80] pod2:[80]] (4.276169593s elapsed, will retry) Jan 7 13:34:37.369: INFO: successfully validated that service endpoint-test2 in namespace services-7636 exposes endpoints map[pod1:[80] pod2:[80]] (7.340841561s elapsed) STEP: Deleting pod pod1 in namespace services-7636 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7636 to expose endpoints map[pod2:[80]] Jan 7 13:34:38.417: INFO: successfully validated that service endpoint-test2 in namespace services-7636 exposes endpoints map[pod2:[80]] (1.031934055s elapsed) STEP: Deleting pod pod2 in namespace services-7636 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7636 to expose endpoints map[] Jan 7 13:34:39.522: INFO: successfully validated that service endpoint-test2 in namespace services-7636 exposes endpoints 
map[] (1.085385093s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:34:40.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7636" for this suite. Jan 7 13:35:02.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:35:02.254: INFO: namespace services-7636 deletion completed in 22.219611139s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.584 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:35:02.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 7 13:35:11.582: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:35:11.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7736" for this suite. 
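The adopt/release dance above is driven entirely by labels and ownerReferences. Against this test's own pod (pod-adoption-release, carrying the 'name' label the ReplicaSet selects on) it could be replayed roughly as follows; the replacement label value is illustrative:

kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'
# -> ReplicaSet                 (adopted: the controller set itself as owner)
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'
# -> empty                      (released: the pod no longer matches the selector,
#                                and the ReplicaSet creates a replacement pod)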
Jan 7 13:35:51.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:35:51.943: INFO: namespace replicaset-7736 deletion completed in 40.258010578s • [SLOW TEST:49.685 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:35:51.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7993 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7993 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7993 Jan 7 13:35:52.098: INFO: Found 0 stateful pods, waiting for 1 Jan 7 13:36:02.108: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 7 13:36:02.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:36:04.935: INFO: stderr: "I0107 13:36:04.411194 527 log.go:172] (0xc000c96580) (0xc000cc6780) Create stream\nI0107 13:36:04.411391 527 log.go:172] (0xc000c96580) (0xc000cc6780) Stream added, broadcasting: 1\nI0107 13:36:04.423574 527 log.go:172] (0xc000c96580) Reply frame received for 1\nI0107 13:36:04.423715 527 log.go:172] (0xc000c96580) (0xc000cc6820) Create stream\nI0107 13:36:04.423731 527 log.go:172] (0xc000c96580) (0xc000cc6820) Stream added, broadcasting: 3\nI0107 13:36:04.425614 527 log.go:172] (0xc000c96580) Reply frame received for 3\nI0107 13:36:04.425668 527 log.go:172] (0xc000c96580) (0xc000e9a000) Create stream\nI0107 13:36:04.425686 527 log.go:172] (0xc000c96580) (0xc000e9a000) Stream added, broadcasting: 5\nI0107 13:36:04.430837 527 log.go:172] (0xc000c96580) Reply frame received for 5\nI0107 13:36:04.676960 527 log.go:172] (0xc000c96580) Data frame received for 5\nI0107 13:36:04.677196 527 log.go:172] (0xc000e9a000) (5) Data frame handling\nI0107 13:36:04.677323 527 log.go:172] (0xc000e9a000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0107 13:36:04.739601 527 log.go:172] (0xc000c96580) Data frame received for 3\nI0107 13:36:04.739723 527 log.go:172] (0xc000cc6820) (3) Data frame handling\nI0107 13:36:04.739770 527 log.go:172] (0xc000cc6820) (3) Data frame sent\nI0107 13:36:04.912111 527 log.go:172] (0xc000c96580) Data frame received for 1\nI0107 13:36:04.912351 527 log.go:172] (0xc000c96580) (0xc000cc6820) Stream removed, broadcasting: 3\nI0107 13:36:04.912575 527 log.go:172] (0xc000cc6780) (1) Data frame handling\nI0107 13:36:04.912648 527 log.go:172] (0xc000cc6780) (1) Data frame sent\nI0107 13:36:04.912662 527 log.go:172] (0xc000c96580) (0xc000cc6780) Stream removed, broadcasting: 1\nI0107 13:36:04.914420 527 log.go:172] (0xc000c96580) (0xc000e9a000) Stream removed, broadcasting: 5\nI0107 13:36:04.914609 527 log.go:172] (0xc000c96580) Go away received\nI0107 13:36:04.914728 527 log.go:172] (0xc000c96580) (0xc000cc6780) Stream removed, broadcasting: 1\nI0107 13:36:04.914776 527 log.go:172] (0xc000c96580) (0xc000cc6820) Stream removed, broadcasting: 3\nI0107 13:36:04.914809 527 log.go:172] (0xc000c96580) (0xc000e9a000) Stream removed, broadcasting: 5\n" Jan 7 13:36:04.935: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:36:04.935: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:36:04.944: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 7 13:36:14.961: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:36:14.961: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:36:14.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999714s Jan 7 13:36:16.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987331082s Jan 7 13:36:17.052: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978160047s Jan 7 13:36:18.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.928240243s Jan 7 13:36:19.169: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.920156829s Jan 7 13:36:20.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.812083143s Jan 7 13:36:21.193: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.801562191s Jan 7 13:36:22.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.787610631s Jan 7 13:36:23.225: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.772436024s Jan 7 13:36:24.243: INFO: Verifying statefulset ss doesn't scale past 1 for another 756.134107ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7993 Jan 7 13:36:25.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:36:26.004: INFO: stderr: "I0107 13:36:25.513895 562 log.go:172] (0xc00013afd0) (0xc00067a960) Create stream\nI0107 13:36:25.514051 562 log.go:172] (0xc00013afd0) (0xc00067a960) Stream added, broadcasting: 1\nI0107 13:36:25.531659 562 log.go:172] (0xc00013afd0) Reply frame received for 1\nI0107 13:36:25.531709 562 log.go:172] (0xc00013afd0) (0xc000910000) Create stream\nI0107 13:36:25.531720 562 log.go:172] (0xc00013afd0) (0xc000910000) Stream added, broadcasting: 3\nI0107 13:36:25.533144 562 log.go:172] 
(0xc00013afd0) Reply frame received for 3\nI0107 13:36:25.533186 562 log.go:172] (0xc00013afd0) (0xc00067a1e0) Create stream\nI0107 13:36:25.533215 562 log.go:172] (0xc00013afd0) (0xc00067a1e0) Stream added, broadcasting: 5\nI0107 13:36:25.534730 562 log.go:172] (0xc00013afd0) Reply frame received for 5\nI0107 13:36:25.656130 562 log.go:172] (0xc00013afd0) Data frame received for 5\nI0107 13:36:25.656247 562 log.go:172] (0xc00067a1e0) (5) Data frame handling\nI0107 13:36:25.656283 562 log.go:172] (0xc00067a1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 13:36:25.656375 562 log.go:172] (0xc00013afd0) Data frame received for 3\nI0107 13:36:25.656461 562 log.go:172] (0xc000910000) (3) Data frame handling\nI0107 13:36:25.656491 562 log.go:172] (0xc000910000) (3) Data frame sent\nI0107 13:36:25.975353 562 log.go:172] (0xc00013afd0) Data frame received for 1\nI0107 13:36:25.975651 562 log.go:172] (0xc00013afd0) (0xc000910000) Stream removed, broadcasting: 3\nI0107 13:36:25.975880 562 log.go:172] (0xc00067a960) (1) Data frame handling\nI0107 13:36:25.975973 562 log.go:172] (0xc00067a960) (1) Data frame sent\nI0107 13:36:25.976006 562 log.go:172] (0xc00013afd0) (0xc00067a960) Stream removed, broadcasting: 1\nI0107 13:36:25.977461 562 log.go:172] (0xc00013afd0) (0xc00067a1e0) Stream removed, broadcasting: 5\nI0107 13:36:25.978170 562 log.go:172] (0xc00013afd0) Go away received\nI0107 13:36:25.978259 562 log.go:172] (0xc00013afd0) (0xc00067a960) Stream removed, broadcasting: 1\nI0107 13:36:25.978308 562 log.go:172] (0xc00013afd0) (0xc000910000) Stream removed, broadcasting: 3\nI0107 13:36:25.978338 562 log.go:172] (0xc00013afd0) (0xc00067a1e0) Stream removed, broadcasting: 5\n" Jan 7 13:36:26.005: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:36:26.005: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:36:26.017: INFO: Found 2 stateful pods, waiting for 3 Jan 7 13:36:36.026: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:36:36.026: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:36:36.026: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 7 13:36:46.041: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:36:46.042: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:36:46.042: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 7 13:36:46.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:36:46.619: INFO: stderr: "I0107 13:36:46.276482 584 log.go:172] (0xc0001166e0) (0xc000864820) Create stream\nI0107 13:36:46.276745 584 log.go:172] (0xc0001166e0) (0xc000864820) Stream added, broadcasting: 1\nI0107 13:36:46.284411 584 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0107 13:36:46.284467 584 log.go:172] (0xc0001166e0) (0xc00070e3c0) Create stream\nI0107 13:36:46.284492 584 log.go:172] (0xc0001166e0) (0xc00070e3c0) Stream added, broadcasting: 3\nI0107 13:36:46.286531 584 log.go:172] (0xc0001166e0) Reply frame 
received for 3\nI0107 13:36:46.286598 584 log.go:172] (0xc0001166e0) (0xc0009fc000) Create stream\nI0107 13:36:46.286609 584 log.go:172] (0xc0001166e0) (0xc0009fc000) Stream added, broadcasting: 5\nI0107 13:36:46.287652 584 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0107 13:36:46.385291 584 log.go:172] (0xc0001166e0) Data frame received for 3\nI0107 13:36:46.385368 584 log.go:172] (0xc00070e3c0) (3) Data frame handling\nI0107 13:36:46.385386 584 log.go:172] (0xc00070e3c0) (3) Data frame sent\nI0107 13:36:46.385447 584 log.go:172] (0xc0001166e0) Data frame received for 5\nI0107 13:36:46.385467 584 log.go:172] (0xc0009fc000) (5) Data frame handling\nI0107 13:36:46.385488 584 log.go:172] (0xc0009fc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:36:46.595351 584 log.go:172] (0xc0001166e0) (0xc00070e3c0) Stream removed, broadcasting: 3\nI0107 13:36:46.595553 584 log.go:172] (0xc0001166e0) Data frame received for 1\nI0107 13:36:46.595671 584 log.go:172] (0xc0001166e0) (0xc0009fc000) Stream removed, broadcasting: 5\nI0107 13:36:46.596007 584 log.go:172] (0xc000864820) (1) Data frame handling\nI0107 13:36:46.596055 584 log.go:172] (0xc000864820) (1) Data frame sent\nI0107 13:36:46.596068 584 log.go:172] (0xc0001166e0) (0xc000864820) Stream removed, broadcasting: 1\nI0107 13:36:46.596105 584 log.go:172] (0xc0001166e0) Go away received\nI0107 13:36:46.598354 584 log.go:172] (0xc0001166e0) (0xc000864820) Stream removed, broadcasting: 1\nI0107 13:36:46.598374 584 log.go:172] (0xc0001166e0) (0xc00070e3c0) Stream removed, broadcasting: 3\nI0107 13:36:46.598383 584 log.go:172] (0xc0001166e0) (0xc0009fc000) Stream removed, broadcasting: 5\n" Jan 7 13:36:46.619: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:36:46.619: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:36:46.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:36:47.219: INFO: stderr: "I0107 13:36:46.865697 604 log.go:172] (0xc000992420) (0xc0005746e0) Create stream\nI0107 13:36:46.866105 604 log.go:172] (0xc000992420) (0xc0005746e0) Stream added, broadcasting: 1\nI0107 13:36:46.871648 604 log.go:172] (0xc000992420) Reply frame received for 1\nI0107 13:36:46.871756 604 log.go:172] (0xc000992420) (0xc000932000) Create stream\nI0107 13:36:46.871766 604 log.go:172] (0xc000992420) (0xc000932000) Stream added, broadcasting: 3\nI0107 13:36:46.873453 604 log.go:172] (0xc000992420) Reply frame received for 3\nI0107 13:36:46.873489 604 log.go:172] (0xc000992420) (0xc0009320a0) Create stream\nI0107 13:36:46.873497 604 log.go:172] (0xc000992420) (0xc0009320a0) Stream added, broadcasting: 5\nI0107 13:36:46.874523 604 log.go:172] (0xc000992420) Reply frame received for 5\nI0107 13:36:47.029117 604 log.go:172] (0xc000992420) Data frame received for 5\nI0107 13:36:47.029188 604 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0107 13:36:47.029211 604 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:36:47.105869 604 log.go:172] (0xc000992420) Data frame received for 3\nI0107 13:36:47.106117 604 log.go:172] (0xc000932000) (3) Data frame handling\nI0107 13:36:47.106207 604 log.go:172] (0xc000932000) (3) Data frame sent\nI0107 13:36:47.207462 604 log.go:172] (0xc000992420) (0xc000932000) 
Stream removed, broadcasting: 3\nI0107 13:36:47.207948 604 log.go:172] (0xc000992420) Data frame received for 1\nI0107 13:36:47.208044 604 log.go:172] (0xc0005746e0) (1) Data frame handling\nI0107 13:36:47.208127 604 log.go:172] (0xc0005746e0) (1) Data frame sent\nI0107 13:36:47.208461 604 log.go:172] (0xc000992420) (0xc0005746e0) Stream removed, broadcasting: 1\nI0107 13:36:47.208841 604 log.go:172] (0xc000992420) (0xc0009320a0) Stream removed, broadcasting: 5\nI0107 13:36:47.209023 604 log.go:172] (0xc000992420) Go away received\nI0107 13:36:47.209256 604 log.go:172] (0xc000992420) (0xc0005746e0) Stream removed, broadcasting: 1\nI0107 13:36:47.209340 604 log.go:172] (0xc000992420) (0xc000932000) Stream removed, broadcasting: 3\nI0107 13:36:47.209412 604 log.go:172] (0xc000992420) (0xc0009320a0) Stream removed, broadcasting: 5\n" Jan 7 13:36:47.220: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:36:47.220: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:36:47.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:36:47.942: INFO: stderr: "I0107 13:36:47.481903 622 log.go:172] (0xc000116e70) (0xc000420780) Create stream\nI0107 13:36:47.482119 622 log.go:172] (0xc000116e70) (0xc000420780) Stream added, broadcasting: 1\nI0107 13:36:47.497117 622 log.go:172] (0xc000116e70) Reply frame received for 1\nI0107 13:36:47.503075 622 log.go:172] (0xc000116e70) (0xc00094a000) Create stream\nI0107 13:36:47.507465 622 log.go:172] (0xc000116e70) (0xc00094a000) Stream added, broadcasting: 3\nI0107 13:36:47.514299 622 log.go:172] (0xc000116e70) Reply frame received for 3\nI0107 13:36:47.514391 622 log.go:172] (0xc000116e70) (0xc00094a0a0) Create stream\nI0107 13:36:47.514414 622 log.go:172] (0xc000116e70) (0xc00094a0a0) Stream added, broadcasting: 5\nI0107 13:36:47.518025 622 log.go:172] (0xc000116e70) Reply frame received for 5\nI0107 13:36:47.712249 622 log.go:172] (0xc000116e70) Data frame received for 5\nI0107 13:36:47.712895 622 log.go:172] (0xc00094a0a0) (5) Data frame handling\nI0107 13:36:47.713114 622 log.go:172] (0xc00094a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:36:47.729724 622 log.go:172] (0xc000116e70) Data frame received for 3\nI0107 13:36:47.729937 622 log.go:172] (0xc00094a000) (3) Data frame handling\nI0107 13:36:47.730030 622 log.go:172] (0xc00094a000) (3) Data frame sent\nI0107 13:36:47.926942 622 log.go:172] (0xc000116e70) Data frame received for 1\nI0107 13:36:47.927249 622 log.go:172] (0xc000116e70) (0xc00094a0a0) Stream removed, broadcasting: 5\nI0107 13:36:47.927358 622 log.go:172] (0xc000420780) (1) Data frame handling\nI0107 13:36:47.927397 622 log.go:172] (0xc000420780) (1) Data frame sent\nI0107 13:36:47.927759 622 log.go:172] (0xc000116e70) (0xc00094a000) Stream removed, broadcasting: 3\nI0107 13:36:47.927891 622 log.go:172] (0xc000116e70) (0xc000420780) Stream removed, broadcasting: 1\nI0107 13:36:47.927912 622 log.go:172] (0xc000116e70) Go away received\nI0107 13:36:47.929921 622 log.go:172] (0xc000116e70) (0xc000420780) Stream removed, broadcasting: 1\nI0107 13:36:47.929946 622 log.go:172] (0xc000116e70) (0xc00094a000) Stream removed, broadcasting: 3\nI0107 13:36:47.929959 622 log.go:172] (0xc000116e70) (0xc00094a0a0) Stream removed, broadcasting: 5\n" Jan 7 
13:36:47.943: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:36:47.943: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:36:47.943: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:36:47.978: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 7 13:36:57.994: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:36:57.994: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:36:57.994: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:36:58.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999655s Jan 7 13:36:59.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986921883s Jan 7 13:37:00.061: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969812133s Jan 7 13:37:01.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.942485479s Jan 7 13:37:02.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.933600845s Jan 7 13:37:03.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.446064231s Jan 7 13:37:04.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.436087136s Jan 7 13:37:05.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.421172899s Jan 7 13:37:06.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.408411584s Jan 7 13:37:07.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 392.892548ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7993 Jan 7 13:37:08.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:09.251: INFO: stderr: "I0107 13:37:08.865614 644 log.go:172] (0xc00083e2c0) (0xc0008226e0) Create stream\nI0107 13:37:08.865840 644 log.go:172] (0xc00083e2c0) (0xc0008226e0) Stream added, broadcasting: 1\nI0107 13:37:08.875214 644 log.go:172] (0xc00083e2c0) Reply frame received for 1\nI0107 13:37:08.875375 644 log.go:172] (0xc00083e2c0) (0xc000694460) Create stream\nI0107 13:37:08.875434 644 log.go:172] (0xc00083e2c0) (0xc000694460) Stream added, broadcasting: 3\nI0107 13:37:08.878434 644 log.go:172] (0xc00083e2c0) Reply frame received for 3\nI0107 13:37:08.878485 644 log.go:172] (0xc00083e2c0) (0xc000822780) Create stream\nI0107 13:37:08.878498 644 log.go:172] (0xc00083e2c0) (0xc000822780) Stream added, broadcasting: 5\nI0107 13:37:08.880709 644 log.go:172] (0xc00083e2c0) Reply frame received for 5\nI0107 13:37:09.074763 644 log.go:172] (0xc00083e2c0) Data frame received for 3\nI0107 13:37:09.074915 644 log.go:172] (0xc000694460) (3) Data frame handling\nI0107 13:37:09.074957 644 log.go:172] (0xc000694460) (3) Data frame sent\nI0107 13:37:09.075032 644 log.go:172] (0xc00083e2c0) Data frame received for 5\nI0107 13:37:09.075074 644 log.go:172] (0xc000822780) (5) Data frame handling\nI0107 13:37:09.075129 644 log.go:172] (0xc000822780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 13:37:09.233299 644 log.go:172] (0xc00083e2c0) Data frame received for 1\nI0107 13:37:09.233475 644 log.go:172] (0xc00083e2c0) (0xc000694460) Stream removed, broadcasting:
3\nI0107 13:37:09.233542 644 log.go:172] (0xc0008226e0) (1) Data frame handling\nI0107 13:37:09.233615 644 log.go:172] (0xc0008226e0) (1) Data frame sent\nI0107 13:37:09.233629 644 log.go:172] (0xc00083e2c0) (0xc000822780) Stream removed, broadcasting: 5\nI0107 13:37:09.233741 644 log.go:172] (0xc00083e2c0) (0xc0008226e0) Stream removed, broadcasting: 1\nI0107 13:37:09.233764 644 log.go:172] (0xc00083e2c0) Go away received\nI0107 13:37:09.235345 644 log.go:172] (0xc00083e2c0) (0xc0008226e0) Stream removed, broadcasting: 1\nI0107 13:37:09.235370 644 log.go:172] (0xc00083e2c0) (0xc000694460) Stream removed, broadcasting: 3\nI0107 13:37:09.235382 644 log.go:172] (0xc00083e2c0) (0xc000822780) Stream removed, broadcasting: 5\n" Jan 7 13:37:09.252: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:37:09.252: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:37:09.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:09.626: INFO: stderr: "I0107 13:37:09.446806 661 log.go:172] (0xc0009e4630) (0xc000672a00) Create stream\nI0107 13:37:09.447006 661 log.go:172] (0xc0009e4630) (0xc000672a00) Stream added, broadcasting: 1\nI0107 13:37:09.457996 661 log.go:172] (0xc0009e4630) Reply frame received for 1\nI0107 13:37:09.458070 661 log.go:172] (0xc0009e4630) (0xc000672280) Create stream\nI0107 13:37:09.458088 661 log.go:172] (0xc0009e4630) (0xc000672280) Stream added, broadcasting: 3\nI0107 13:37:09.460542 661 log.go:172] (0xc0009e4630) Reply frame received for 3\nI0107 13:37:09.460637 661 log.go:172] (0xc0009e4630) (0xc00096c000) Create stream\nI0107 13:37:09.460680 661 log.go:172] (0xc0009e4630) (0xc00096c000) Stream added, broadcasting: 5\nI0107 13:37:09.464154 661 log.go:172] (0xc0009e4630) Reply frame received for 5\nI0107 13:37:09.541464 661 log.go:172] (0xc0009e4630) Data frame received for 3\nI0107 13:37:09.541494 661 log.go:172] (0xc000672280) (3) Data frame handling\nI0107 13:37:09.541545 661 log.go:172] (0xc000672280) (3) Data frame sent\nI0107 13:37:09.541619 661 log.go:172] (0xc0009e4630) Data frame received for 5\nI0107 13:37:09.541649 661 log.go:172] (0xc00096c000) (5) Data frame handling\nI0107 13:37:09.541666 661 log.go:172] (0xc00096c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 13:37:09.616154 661 log.go:172] (0xc0009e4630) (0xc000672280) Stream removed, broadcasting: 3\nI0107 13:37:09.616244 661 log.go:172] (0xc0009e4630) Data frame received for 1\nI0107 13:37:09.616276 661 log.go:172] (0xc000672a00) (1) Data frame handling\nI0107 13:37:09.616293 661 log.go:172] (0xc000672a00) (1) Data frame sent\nI0107 13:37:09.616316 661 log.go:172] (0xc0009e4630) (0xc000672a00) Stream removed, broadcasting: 1\nI0107 13:37:09.616336 661 log.go:172] (0xc0009e4630) (0xc00096c000) Stream removed, broadcasting: 5\nI0107 13:37:09.616576 661 log.go:172] (0xc0009e4630) Go away received\nI0107 13:37:09.617337 661 log.go:172] (0xc0009e4630) (0xc000672a00) Stream removed, broadcasting: 1\nI0107 13:37:09.617353 661 log.go:172] (0xc0009e4630) (0xc000672280) Stream removed, broadcasting: 3\nI0107 13:37:09.617364 661 log.go:172] (0xc0009e4630) (0xc00096c000) Stream removed, broadcasting: 5\n" Jan 7 13:37:09.626: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:37:09.626: INFO: stdout 
of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:37:09.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:10.253: INFO: rc: 126 Jan 7 13:37:10.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown I0107 13:37:09.883638 681 log.go:172] (0xc00040c420) (0xc000896640) Create stream I0107 13:37:09.884030 681 log.go:172] (0xc00040c420) (0xc000896640) Stream added, broadcasting: 1 I0107 13:37:09.889962 681 log.go:172] (0xc00040c420) Reply frame received for 1 I0107 13:37:09.890018 681 log.go:172] (0xc00040c420) (0xc00097c000) Create stream I0107 13:37:09.890030 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream added, broadcasting: 3 I0107 13:37:09.891464 681 log.go:172] (0xc00040c420) Reply frame received for 3 I0107 13:37:09.891484 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Create stream I0107 13:37:09.891489 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream added, broadcasting: 5 I0107 13:37:09.892384 681 log.go:172] (0xc00040c420) Reply frame received for 5 I0107 13:37:10.232556 681 log.go:172] (0xc00040c420) Data frame received for 1 I0107 13:37:10.232754 681 log.go:172] (0xc000896640) (1) Data frame handling I0107 13:37:10.232803 681 log.go:172] (0xc000896640) (1) Data frame sent I0107 13:37:10.232874 681 log.go:172] (0xc00040c420) (0xc000896640) Stream removed, broadcasting: 1 I0107 13:37:10.233948 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream removed, broadcasting: 5 I0107 13:37:10.234495 681 log.go:172] (0xc00040c420) Data frame received for 3 I0107 13:37:10.235091 681 log.go:172] (0xc00097c000) (3) Data frame handling I0107 13:37:10.235347 681 log.go:172] (0xc00097c000) (3) Data frame sent I0107 13:37:10.235432 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream removed, broadcasting: 3 I0107 13:37:10.235524 681 log.go:172] (0xc00040c420) Go away received I0107 13:37:10.236816 681 log.go:172] (0xc00040c420) (0xc000896640) Stream removed, broadcasting: 1 I0107 13:37:10.236852 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream removed, broadcasting: 3 I0107 13:37:10.236865 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc0029aa960 exit status 126 true [0xc002202140 0xc002202158 0xc002202170] [0xc002202140 0xc002202158 0xc002202170] [0xc002202150 0xc002202168] [0xba6c50 0xba6c50] 0xc002bd0d20 }: Command stdout: OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown stderr: I0107 13:37:09.883638 681 log.go:172] (0xc00040c420) (0xc000896640) Create stream I0107 13:37:09.884030 681 log.go:172] (0xc00040c420) (0xc000896640) Stream added, broadcasting: 1 I0107 13:37:09.889962 681 log.go:172] (0xc00040c420) Reply frame received for 1 I0107 13:37:09.890018 681 log.go:172] (0xc00040c420) (0xc00097c000) Create stream I0107 13:37:09.890030 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream added, 
broadcasting: 3 I0107 13:37:09.891464 681 log.go:172] (0xc00040c420) Reply frame received for 3 I0107 13:37:09.891484 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Create stream I0107 13:37:09.891489 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream added, broadcasting: 5 I0107 13:37:09.892384 681 log.go:172] (0xc00040c420) Reply frame received for 5 I0107 13:37:10.232556 681 log.go:172] (0xc00040c420) Data frame received for 1 I0107 13:37:10.232754 681 log.go:172] (0xc000896640) (1) Data frame handling I0107 13:37:10.232803 681 log.go:172] (0xc000896640) (1) Data frame sent I0107 13:37:10.232874 681 log.go:172] (0xc00040c420) (0xc000896640) Stream removed, broadcasting: 1 I0107 13:37:10.233948 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream removed, broadcasting: 5 I0107 13:37:10.234495 681 log.go:172] (0xc00040c420) Data frame received for 3 I0107 13:37:10.235091 681 log.go:172] (0xc00097c000) (3) Data frame handling I0107 13:37:10.235347 681 log.go:172] (0xc00097c000) (3) Data frame sent I0107 13:37:10.235432 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream removed, broadcasting: 3 I0107 13:37:10.235524 681 log.go:172] (0xc00040c420) Go away received I0107 13:37:10.236816 681 log.go:172] (0xc00040c420) (0xc000896640) Stream removed, broadcasting: 1 I0107 13:37:10.236852 681 log.go:172] (0xc00040c420) (0xc00097c000) Stream removed, broadcasting: 3 I0107 13:37:10.236865 681 log.go:172] (0xc00040c420) (0xc00097c0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Jan 7 13:37:20.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:20.583: INFO: rc: 1 Jan 7 13:37:20.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002f93d40 exit status 1 true [0xc0021c01e0 0xc0021c01f8 0xc0021c0218] [0xc0021c01e0 0xc0021c01f8 0xc0021c0218] [0xc0021c01f0 0xc0021c0210] [0xba6c50 0xba6c50] 0xc0025da9c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 7 13:37:30.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:30.758: INFO: rc: 1 Jan 7 13:37:30.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d65d0 exit status 1 true [0xc00303e128 0xc00303e140 0xc00303e158] [0xc00303e128 0xc00303e140 0xc00303e158] [0xc00303e138 0xc00303e150] [0xba6c50 0xba6c50] 0xc00242f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:37:40.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:40.969: INFO: rc: 1 Jan 7 13:37:40.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029aaa20 exit status 1 true [0xc002202178 0xc002202190 0xc0022021a8] [0xc002202178 0xc002202190 0xc0022021a8] [0xc002202188 0xc0022021a0] [0xba6c50 0xba6c50] 0xc002bd1080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:37:50.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:37:51.113: INFO: rc: 1 Jan 7 13:37:51.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002f93e00 exit status 1 true [0xc0021c0220 0xc0021c0238 0xc0021c0250] [0xc0021c0220 0xc0021c0238 0xc0021c0250] [0xc0021c0230 0xc0021c0248] [0xba6c50 0xba6c50] 0xc0025dade0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:01.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:01.340: INFO: rc: 1 Jan 7 13:38:01.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002f93ef0 exit status 1 true [0xc0021c0258 0xc0021c0270 0xc0021c0288] [0xc0021c0258 0xc0021c0270 0xc0021c0288] [0xc0021c0268 0xc0021c0280] [0xba6c50 0xba6c50] 0xc0025db200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:11.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:11.511: INFO: rc: 1 Jan 7 13:38:11.511: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d66f0 exit status 1 true [0xc00303e160 0xc00303e178 0xc00303e190] [0xc00303e160 0xc00303e178 0xc00303e190] [0xc00303e170 0xc00303e188] [0xba6c50 0xba6c50] 0xc00242fce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:21.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:21.729: INFO: rc: 1 Jan 7 13:38:21.730: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c3c090 exit status 1 true [0xc002202000 0xc002202018 
0xc002202030] [0xc002202000 0xc002202018 0xc002202030] [0xc002202010 0xc002202028] [0xba6c50 0xba6c50] 0xc0023dc5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:31.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:31.921: INFO: rc: 1 Jan 7 13:38:31.921: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026bc0c0 exit status 1 true [0xc0021c0000 0xc0021c0018 0xc0021c0030] [0xc0021c0000 0xc0021c0018 0xc0021c0030] [0xc0021c0010 0xc0021c0028] [0xba6c50 0xba6c50] 0xc002271560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:41.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:42.097: INFO: rc: 1 Jan 7 13:38:42.097: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216090 exit status 1 true [0xc00303e000 0xc00303e018 0xc00303e030] [0xc00303e000 0xc00303e018 0xc00303e030] [0xc00303e010 0xc00303e028] [0xba6c50 0xba6c50] 0xc002c24240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:38:52.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:38:52.277: INFO: rc: 1 Jan 7 13:38:52.278: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216150 exit status 1 true [0xc00303e038 0xc00303e050 0xc00303e068] [0xc00303e038 0xc00303e050 0xc00303e068] [0xc00303e048 0xc00303e060] [0xba6c50 0xba6c50] 0xc002c246c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:02.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:02.455: INFO: rc: 1 Jan 7 13:39:02.455: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c3c180 exit status 1 true [0xc002202038 0xc002202050 0xc002202068] [0xc002202038 0xc002202050 0xc002202068] [0xc002202048 0xc002202060] [0xba6c50 0xba6c50] 0xc0023ddec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:12.456: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:12.654: INFO: rc: 1 Jan 7 13:39:12.654: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c3c240 exit status 1 true [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202080 0xc002202098] [0xba6c50 0xba6c50] 0xc0028a2720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:22.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:22.806: INFO: rc: 1 Jan 7 13:39:22.806: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c3c330 exit status 1 true [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020b8 0xc0022020d0] [0xba6c50 0xba6c50] 0xc0028a2f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:32.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:33.023: INFO: rc: 1 Jan 7 13:39:33.024: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a090 exit status 1 true [0xc000d3a008 0xc000d3a4b0 0xc000d3a958] [0xc000d3a008 0xc000d3a4b0 0xc000d3a958] [0xc000d3a480 0xc000d3a818] [0xba6c50 0xba6c50] 0xc002bd0240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:43.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:43.227: INFO: rc: 1 Jan 7 13:39:43.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026bc210 exit status 1 true [0xc0021c0038 0xc0021c0050 0xc0021c0068] [0xc0021c0038 0xc0021c0050 0xc0021c0068] [0xc0021c0048 0xc0021c0060] [0xba6c50 0xba6c50] 0xc0025da060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:39:53.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:39:53.469: INFO: rc: 1 Jan 7 13:39:53.469: INFO: Waiting 10s to retry failed RunHostCmd: error 
running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216210 exit status 1 true [0xc00303e070 0xc00303e088 0xc00303e0a0] [0xc00303e070 0xc00303e088 0xc00303e0a0] [0xc00303e080 0xc00303e098] [0xba6c50 0xba6c50] 0xc002c24cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:03.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:03.665: INFO: rc: 1 Jan 7 13:40:03.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c3c3f0 exit status 1 true [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020f0 0xc002202108] [0xba6c50 0xba6c50] 0xc0028a32c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:13.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:13.869: INFO: rc: 1 Jan 7 13:40:13.870: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216300 exit status 1 true [0xc00303e0a8 0xc00303e0c0 0xc00303e0d8] [0xc00303e0a8 0xc00303e0c0 0xc00303e0d8] [0xc00303e0b8 0xc00303e0d0] [0xba6c50 0xba6c50] 0xc002c25d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:23.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:24.023: INFO: rc: 1 Jan 7 13:40:24.024: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026bc030 exit status 1 true [0xc0021c0000 0xc0021c0018 0xc0021c0030] [0xc0021c0000 0xc0021c0018 0xc0021c0030] [0xc0021c0010 0xc0021c0028] [0xba6c50 0xba6c50] 0xc002271560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:34.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:34.160: INFO: rc: 1 Jan 7 13:40:34.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0012160c0 exit status 1 
true [0xc002202000 0xc002202018 0xc002202030] [0xc002202000 0xc002202018 0xc002202030] [0xc002202010 0xc002202028] [0xba6c50 0xba6c50] 0xc0023dcb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:44.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:44.321: INFO: rc: 1 Jan 7 13:40:44.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0012161b0 exit status 1 true [0xc002202038 0xc002202050 0xc002202068] [0xc002202038 0xc002202050 0xc002202068] [0xc002202048 0xc002202060] [0xba6c50 0xba6c50] 0xc001d42fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:40:54.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:40:54.552: INFO: rc: 1 Jan 7 13:40:54.554: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0012162a0 exit status 1 true [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202080 0xc002202098] [0xba6c50 0xba6c50] 0xc0025da480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:04.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:04.834: INFO: rc: 1 Jan 7 13:41:04.835: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216390 exit status 1 true [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020b8 0xc0022020d0] [0xba6c50 0xba6c50] 0xc0025da960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:14.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:14.982: INFO: rc: 1 Jan 7 13:41:14.983: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a0f0 exit status 1 true [0xc00303e000 0xc00303e018 0xc00303e030] [0xc00303e000 0xc00303e018 0xc00303e030] [0xc00303e010 0xc00303e028] [0xba6c50 0xba6c50] 0xc0028a2600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:24.984: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:25.162: INFO: rc: 1 Jan 7 13:41:25.163: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a240 exit status 1 true [0xc00303e038 0xc00303e050 0xc00303e068] [0xc00303e038 0xc00303e050 0xc00303e068] [0xc00303e048 0xc00303e060] [0xba6c50 0xba6c50] 0xc0028a2ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:35.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:35.361: INFO: rc: 1 Jan 7 13:41:35.362: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026bc180 exit status 1 true [0xc0021c0038 0xc0021c0050 0xc0021c0068] [0xc0021c0038 0xc0021c0050 0xc0021c0068] [0xc0021c0048 0xc0021c0060] [0xba6c50 0xba6c50] 0xc002c24000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:45.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:45.542: INFO: rc: 1 Jan 7 13:41:45.542: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a330 exit status 1 true [0xc00303e070 0xc00303e088 0xc00303e0a0] [0xc00303e070 0xc00303e088 0xc00303e0a0] [0xc00303e080 0xc00303e098] [0xba6c50 0xba6c50] 0xc0028a3260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:41:55.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:41:55.735: INFO: rc: 1 Jan 7 13:41:55.736: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a420 exit status 1 true [0xc00303e0a8 0xc00303e0c0 0xc00303e0d8] [0xc00303e0a8 0xc00303e0c0 0xc00303e0d8] [0xc00303e0b8 0xc00303e0d0] [0xba6c50 0xba6c50] 0xc0028a36e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:42:05.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:42:05.934: INFO: rc: 1 Jan 7 13:42:05.934: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001216480 exit status 1 true [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020f0 0xc002202108] [0xba6c50 0xba6c50] 0xc0025dad80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:42:15.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7993 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:42:16.219: INFO: rc: 1 Jan 7 13:42:16.219: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Jan 7 13:42:16.220: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 7 13:42:16.244: INFO: Deleting all statefulset in ns statefulset-7993 Jan 7 13:42:16.253: INFO: Scaling statefulset ss to 0 Jan 7 13:42:16.268: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:42:16.272: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:42:16.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7993" for this suite. Jan 7 13:42:22.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:42:22.532: INFO: namespace statefulset-7993 deletion completed in 6.210747973s • [SLOW TEST:390.589 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:42:22.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-458a07a1-f908-40b6-a60d-ef65ae05de00 STEP: Creating a pod to test consume configMaps Jan 7 13:42:22.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9" in namespace "configmap-5408" to be 
"success or failure" Jan 7 13:42:22.705: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.351888ms Jan 7 13:42:24.718: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028017668s Jan 7 13:42:26.730: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03997356s Jan 7 13:42:28.748: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058020647s Jan 7 13:42:30.760: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069340196s Jan 7 13:42:32.776: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085917274s STEP: Saw pod success Jan 7 13:42:32.777: INFO: Pod "pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9" satisfied condition "success or failure" Jan 7 13:42:32.782: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9 container configmap-volume-test: STEP: delete the pod Jan 7 13:42:32.914: INFO: Waiting for pod pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9 to disappear Jan 7 13:42:32.922: INFO: Pod pod-configmaps-6aef46dd-b8fd-45af-a101-e8fda8c908c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:42:32.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5408" for this suite. Jan 7 13:42:38.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:42:39.122: INFO: namespace configmap-5408 deletion completed in 6.193352844s • [SLOW TEST:16.588 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:42:39.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 7 13:42:49.947: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f5ef0608-ba50-41e8-b609-cd4ae213d19a" Jan 7 13:42:49.947: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-f5ef0608-ba50-41e8-b609-cd4ae213d19a" in namespace "pods-648" to be "terminated due to deadline exceeded" Jan 7 13:42:49.974: INFO: Pod "pod-update-activedeadlineseconds-f5ef0608-ba50-41e8-b609-cd4ae213d19a": Phase="Running", Reason="", readiness=true. Elapsed: 26.193535ms Jan 7 13:42:51.981: INFO: Pod "pod-update-activedeadlineseconds-f5ef0608-ba50-41e8-b609-cd4ae213d19a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033942484s Jan 7 13:42:51.981: INFO: Pod "pod-update-activedeadlineseconds-f5ef0608-ba50-41e8-b609-cd4ae213d19a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:42:51.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-648" for this suite. Jan 7 13:42:58.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:42:58.166: INFO: namespace pods-648 deletion completed in 6.178648937s • [SLOW TEST:19.044 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:42:58.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-899cc103-683d-4bad-bc61-5922521e88c7 STEP: Creating secret with name secret-projected-all-test-volume-dbe39b46-295c-4ddc-843b-589013cdeccf STEP: Creating a pod to test Check all projections for projected volume plugin Jan 7 13:42:58.288: INFO: Waiting up to 5m0s for pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9" in namespace "projected-8226" to be "success or failure" Jan 7 13:42:58.297: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607494ms Jan 7 13:43:00.311: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022406723s Jan 7 13:43:02.318: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029839712s Jan 7 13:43:04.328: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039037149s Jan 7 13:43:06.341: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.052371234s Jan 7 13:43:08.353: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064162854s STEP: Saw pod success Jan 7 13:43:08.353: INFO: Pod "projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9" satisfied condition "success or failure" Jan 7 13:43:08.358: INFO: Trying to get logs from node iruya-node pod projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9 container projected-all-volume-test: STEP: delete the pod Jan 7 13:43:08.529: INFO: Waiting for pod projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9 to disappear Jan 7 13:43:08.540: INFO: Pod projected-volume-235c68d5-91da-45e0-890e-4473bfe848d9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:43:08.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8226" for this suite. Jan 7 13:43:14.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:43:14.691: INFO: namespace projected-8226 deletion completed in 6.145092267s • [SLOW TEST:16.525 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:43:14.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 7 13:43:14.763: INFO: PodSpec: initContainers in spec.initContainers Jan 7 13:44:22.777: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-62ed7c41-5117-4e9c-b989-123ab1424cc1", GenerateName:"", Namespace:"init-container-6534", SelfLink:"/api/v1/namespaces/init-container-6534/pods/pod-init-62ed7c41-5117-4e9c-b989-123ab1424cc1", UID:"3791eb75-b97e-4f49-a315-dcccc9c901b1", ResourceVersion:"19653047", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714001394, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"763005575"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), 
ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jhhr7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001498b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhr7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhr7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhr7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00033c5b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025b0c60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00033c6e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00033c700)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00033c708), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00033c70c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63714001394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001037c00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001917260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019172d0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://643c4261d1e0734c353e22ee9528cb026e7d4e5a18f230c6785991b0bac6bd3f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001037c40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001037c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:44:22.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6534" for this suite. 
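
An aside for readability: the pod dumped above is much easier to follow as source. The sketch below reconstructs its shape with client-go: two init containers, the first of which always fails, ahead of a pause app container under RestartPolicy Always. The pod name and namespace are illustrative; the images, commands, and the 100m CPU / 52428800-byte limits are taken from the dump, and the context-free Create call matches the v1.15-era client-go used by this suite.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        res := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("52428800"),
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    // init1 exits nonzero every time, so the kubelet keeps
                    // restarting it (RestartCount:3 in the dump above) and
                    // never reaches init2 or the app container run1.
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{{
                    Name:      "run1",
                    Image:     "k8s.gcr.io/pause:3.1",
                    Resources: corev1.ResourceRequirements{Limits: res, Requests: res},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

Because limits equal requests, the pod lands in the Guaranteed QoS class, matching the QOSClass reported in the status dump, and init1's permanent failure keeps the pod Pending, which is exactly the state the test needs to observe.
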
Jan 7 13:44:44.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:44:45.131: INFO: namespace init-container-6534 deletion completed in 22.312920867s • [SLOW TEST:90.439 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:44:45.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 7 13:44:45.194: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 7 13:44:45.231: INFO: Waiting for terminating namespaces to be deleted... Jan 7 13:44:45.235: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 7 13:44:45.260: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.260: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:44:45.260: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 7 13:44:45.260: INFO: Container weave ready: true, restart count 0 Jan 7 13:44:45.260: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:44:45.260: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 7 13:44:45.278: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container kube-scheduler ready: true, restart count 12 Jan 7 13:44:45.279: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container coredns ready: true, restart count 0 Jan 7 13:44:45.279: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container etcd ready: true, restart count 0 Jan 7 13:44:45.279: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 7 13:44:45.279: INFO: Container weave ready: true, restart count 0 Jan 7 13:44:45.279: INFO: Container weave-npc ready: true, restart count 0 Jan 7 13:44:45.279: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container coredns ready: true, restart count 0 Jan 7 13:44:45.279: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container 
statuses recorded) Jan 7 13:44:45.279: INFO: Container kube-controller-manager ready: true, restart count 18 Jan 7 13:44:45.279: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container kube-proxy ready: true, restart count 0 Jan 7 13:44:45.279: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 7 13:44:45.279: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Jan 7 13:44:45.396: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Jan 7 13:44:45.396: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-40254139-3534-49ec-919d-264666b15c9d.15e79eb2da52ad51], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2822/filler-pod-40254139-3534-49ec-919d-264666b15c9d to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-40254139-3534-49ec-919d-264666b15c9d.15e79eb3fdcc5f97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-40254139-3534-49ec-919d-264666b15c9d.15e79eb4cff4b382], Reason = [Created], Message = [Created container filler-pod-40254139-3534-49ec-919d-264666b15c9d] STEP: Considering event: Type = [Normal], Name = [filler-pod-40254139-3534-49ec-919d-264666b15c9d.15e79eb4f0bc0fc4], Reason = [Started], Message = [Started container filler-pod-40254139-3534-49ec-919d-264666b15c9d] STEP: Considering event: Type = [Normal], Name = [filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a.15e79eb2e0ec06ac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2822/filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a.15e79eb4254ba6b9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a.15e79eb503eda85e], Reason = [Created], Message = [Created container filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a] STEP: Considering event: Type = [Normal], Name = [filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a.15e79eb525767580], Reason = [Started], Message = [Started container filler-pod-daea99be-f74e-4741-a8cd-dc1d72a81b6a] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e79eb5b0775db9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:44:58.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2822" for this suite. 
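
The FailedScheduling event is the point of the test: once the filler pods have claimed most of the allocatable CPU on both nodes, any pod whose CPU request no longer fits is rejected by the scheduler's resource predicate. A minimal sketch of such an unschedulable pod, with illustrative name, namespace, and request size (the nodes in this cluster report 4 allocatable CPUs each, per the node description later in this log):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"}, // name echoes the test's
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1",
                    Resources: corev1.ResourceRequirements{
                        // Request more CPU than either 4-CPU node can offer;
                        // the scheduler's resource predicate rejects all nodes.
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("8"),
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

The pod stays Pending and the scheduler emits an event of the same form as above: 0/2 nodes are available: 2 Insufficient cpu.
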
Jan 7 13:45:06.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:45:06.973: INFO: namespace sched-pred-2822 deletion completed in 8.156930282s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.842 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:45:06.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:45:07.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1284' Jan 7 13:45:08.173: INFO: stderr: "" Jan 7 13:45:08.174: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 7 13:45:08.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1284' Jan 7 13:45:08.804: INFO: stderr: "" Jan 7 13:45:08.804: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 7 13:45:09.815: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:09.815: INFO: Found 0 / 1 Jan 7 13:45:10.816: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:10.816: INFO: Found 0 / 1 Jan 7 13:45:11.822: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:11.823: INFO: Found 0 / 1 Jan 7 13:45:12.824: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:12.824: INFO: Found 0 / 1 Jan 7 13:45:13.824: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:13.824: INFO: Found 0 / 1 Jan 7 13:45:14.812: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:14.812: INFO: Found 0 / 1 Jan 7 13:45:15.817: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:15.818: INFO: Found 0 / 1 Jan 7 13:45:16.848: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:16.848: INFO: Found 1 / 1 Jan 7 13:45:16.848: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 7 13:45:16.855: INFO: Selector matched 1 pods for map[app:redis] Jan 7 13:45:16.855: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 7 13:45:16.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rfkzk --namespace=kubectl-1284' Jan 7 13:45:17.007: INFO: stderr: "" Jan 7 13:45:17.007: INFO: stdout: "Name: redis-master-rfkzk\nNamespace: kubectl-1284\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Tue, 07 Jan 2020 13:45:08 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://1871a0e84aed75c9c0afded16a17fe3d23f7373256d17ceb343576d8aaa475a0\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 07 Jan 2020 13:45:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ldb88 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ldb88:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ldb88\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-1284/redis-master-rfkzk to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Jan 7 13:45:17.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1284' Jan 7 13:45:17.125: INFO: stderr: "" Jan 7 13:45:17.125: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1284\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-rfkzk\n" Jan 7 13:45:17.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1284' Jan 7 13:45:17.302: INFO: stderr: "" Jan 7 13:45:17.303: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1284\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.182.40\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 7 13:45:17.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 7 13:45:17.451: INFO: stderr: "" Jan 7 13:45:17.451: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Tue, 07 Jan 2020 13:44:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 07 Jan 2020 13:44:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 07 Jan 2020 13:44:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 07 Jan 2020 13:44:46 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 156d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 87d\n kubectl-1284 redis-master-rfkzk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 7 13:45:17.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1284' Jan 7 13:45:17.601: INFO: stderr: "" Jan 7 13:45:17.601: INFO: stdout: "Name: kubectl-1284\nLabels: e2e-framework=kubectl\n e2e-run=de7d5091-86d1-456b-9724-fdd4601f6236\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:45:17.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1284" for this suite. 
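
These checks drive the plain kubectl binary rather than the API client, so they can be reproduced outside the framework. A sketch that shells out the same way the suite does, reusing the binary path, kubeconfig, and namespace from this run (the pod name redis-master-rfkzk is whatever the RC happened to create and changes every run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        targets := [][]string{
            {"describe", "pod", "redis-master-rfkzk"},
            {"describe", "rc", "redis-master"},
            {"describe", "service", "redis-master"},
        }
        for _, t := range targets {
            args := append([]string{"--kubeconfig=/root/.kube/config", "--namespace=kubectl-1284"}, t...)
            out, err := exec.Command("/usr/local/bin/kubectl", args...).CombinedOutput()
            if err != nil {
                panic(fmt.Sprintf("%v: %s", err, out))
            }
            fmt.Printf("%s\n", out)
        }
    }

The assertions then only require that each describe output mentions the expected fields (name, namespace, node, labels, status, controlling RC, image, and events), which is why the full stdout is captured above.
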
Jan 7 13:45:39.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:45:39.805: INFO: namespace kubectl-1284 deletion completed in 22.198814161s • [SLOW TEST:32.830 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:45:39.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 7 13:45:39.987: INFO: Waiting up to 5m0s for pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5" in namespace "downward-api-4412" to be "success or failure" Jan 7 13:45:40.026: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.658387ms Jan 7 13:45:42.035: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047744346s Jan 7 13:45:44.058: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070421791s Jan 7 13:45:46.072: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08439133s Jan 7 13:45:48.091: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103512837s Jan 7 13:45:50.100: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113372451s STEP: Saw pod success Jan 7 13:45:50.101: INFO: Pod "downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5" satisfied condition "success or failure" Jan 7 13:45:50.105: INFO: Trying to get logs from node iruya-node pod downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5 container dapi-container: STEP: delete the pod Jan 7 13:45:50.167: INFO: Waiting for pod downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5 to disappear Jan 7 13:45:50.234: INFO: Pod downward-api-0d92b998-e529-4ae9-8cd3-c645a280c9f5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:45:50.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4412" for this suite. 
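
The dapi-container above learns its own pod's UID through the downward API: an env var whose value comes from a fieldRef rather than a literal. A minimal sketch, with illustrative pod, namespace, and variable names (the container name and busybox image appear in this run):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container", // container name from the log
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        // POD_UID is an illustrative variable name; the kubelet
                        // substitutes this pod's own metadata.uid at start time.
                        Name: "POD_UID",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                        },
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

The kubelet resolves metadata.uid once at container start, so the test only has to read the container's output and compare it against the UID the API server assigned.
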
Jan 7 13:45:56.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:45:56.524: INFO: namespace downward-api-4412 deletion completed in 6.278037442s • [SLOW TEST:16.718 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:45:56.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-fb64c2fe-4a39-46db-89f0-de4ef2ad03e6 STEP: Creating a pod to test consume configMaps Jan 7 13:45:56.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf" in namespace "configmap-132" to be "success or failure" Jan 7 13:45:56.676: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.608654ms Jan 7 13:45:58.685: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02280721s Jan 7 13:46:00.702: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040338459s Jan 7 13:46:02.721: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058669484s Jan 7 13:46:04.729: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067333226s Jan 7 13:46:06.740: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077452s STEP: Saw pod success Jan 7 13:46:06.740: INFO: Pod "pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf" satisfied condition "success or failure" Jan 7 13:46:06.747: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf container configmap-volume-test: STEP: delete the pod Jan 7 13:46:06.833: INFO: Waiting for pod pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf to disappear Jan 7 13:46:06.838: INFO: Pod pod-configmaps-e5fe407d-4189-4438-8634-a485160cb3bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:46:06.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-132" for this suite. 
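
The non-root variant differs from the plain ConfigMap-volume tests only in running the consuming container under a non-root UID. A sketch assuming an illustrative UID of 1000 and illustrative object names; the DefaultMode field shown is the knob that the defaultMode variant earlier in this run pins instead:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative; the suite generates one per test
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
            Data:       map[string]string{"data-1": "value-1"}, // illustrative payload
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            panic(err)
        }
        uid := int64(1000)  // any non-root UID demonstrates the behavior
        mode := int32(0644)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            DefaultMode:          &mode, // file mode applied to projected keys
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test", // container name from the log
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "configmap-volume", MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
    }
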
Jan 7 13:46:12.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:46:13.028: INFO: namespace configmap-132 deletion completed in 6.171300073s • [SLOW TEST:16.503 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:46:13.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 7 13:46:33.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:33.460: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:35.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:35.474: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:37.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:37.474: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:39.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:39.471: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:41.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:41.471: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:43.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:43.471: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:45.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:45.471: INFO: Pod pod-with-prestop-http-hook still exists Jan 7 13:46:47.461: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 7 13:46:47.690: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:46:47.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6014" for this suite. 
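
The mechanism behind this test: the pod carries an HTTP GET preStop hook aimed at the handler pod created in the BeforeEach step, and deleting the pod must deliver that GET before the container is killed; the long disappear loop above is simply the graceful-deletion window. A sketch of the hooked pod, with the handler address, path, and port as assumptions (they vary per run):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"}, // name from the log
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.1",
                    Lifecycle: &corev1.Lifecycle{
                        // corev1.Handler is the v1.15-era type; newer API
                        // versions rename it LifecycleHandler.
                        PreStop: &corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop",  // illustrative path
                                Host: "10.44.0.1",          // handler pod IP; varies per run
                                Port: intstr.FromInt(8080), // illustrative port
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

After the pod is gone, the suite asks the handler whether the GET arrived; that is the "check prestop hook" step logged above.
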
Jan 7 13:47:09.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:47:10.057: INFO: namespace container-lifecycle-hook-6014 deletion completed in 22.303146479s • [SLOW TEST:57.028 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:47:10.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56 Jan 7 13:47:10.158: INFO: Pod name my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56: Found 0 pods out of 1 Jan 7 13:47:15.175: INFO: Pod name my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56: Found 1 pods out of 1 Jan 7 13:47:15.176: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56" are running Jan 7 13:47:19.198: INFO: Pod "my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56-8mdhp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 13:47:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 13:47:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 13:47:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 13:47:10 +0000 UTC Reason: Message:}]) Jan 7 13:47:19.199: INFO: Trying to dial the pod Jan 7 13:47:24.231: INFO: Controller my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56: Got expected result from replica 1 [my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56-8mdhp]: "my-hostname-basic-e14425f8-f685-4c88-a83e-6a69f7e42d56-8mdhp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:47:24.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4229" for this suite. 
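
The RC test works by serving each replica's own hostname over HTTP: the controller creates one pod, the test dials it, and the reply must equal the pod's name, which is why the log shows my-hostname-basic-...-8mdhp echoed back from replica 1. A sketch of an equivalent controller; the image is an assumption (any server that answers with its own hostname, such as the e2e serve-hostname image, demonstrates the check), and names are shortened from the run:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"}, // shortened from the log
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name: "my-hostname-basic",
                            // Assumed image: a server replying with its own
                            // hostname, which for a pod is the pod name.
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
        if _, err := cs.CoreV1().ReplicationControllers("default").Create(rc); err != nil {
            panic(err)
        }
    }
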
Jan 7 13:47:30.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:47:30.453: INFO: namespace replication-controller-4229 deletion completed in 6.216313941s • [SLOW TEST:20.396 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:47:30.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-c50563e8-a880-432a-a2ae-0d08e70845e2 STEP: Creating a pod to test consume configMaps Jan 7 13:47:30.696: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662" in namespace "projected-641" to be "success or failure" Jan 7 13:47:30.708: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 11.411363ms Jan 7 13:47:32.721: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023636212s Jan 7 13:47:34.733: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036099262s Jan 7 13:47:36.740: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042970806s Jan 7 13:47:38.748: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051310286s Jan 7 13:47:40.756: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059144448s Jan 7 13:47:43.078: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.381363827s STEP: Saw pod success Jan 7 13:47:43.079: INFO: Pod "pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662" satisfied condition "success or failure" Jan 7 13:47:43.087: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662 container projected-configmap-volume-test: STEP: delete the pod Jan 7 13:47:43.177: INFO: Waiting for pod pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662 to disappear Jan 7 13:47:43.226: INFO: Pod pod-projected-configmaps-9a313ddb-d915-4ce8-8675-ece97b933662 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:47:43.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-641" for this suite. Jan 7 13:47:49.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:47:49.429: INFO: namespace projected-641 deletion completed in 6.191221196s • [SLOW TEST:18.974 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:47:49.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 7 13:47:57.837: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:47:57.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1190" for this suite. 
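
The Expected: &{DONE} line is the core assertion of the termination-message test: the container exits nonzero without writing /dev/termination-log, and because its TerminationMessagePolicy is FallbackToLogsOnError, the kubelet copies the tail of the container log into the terminated state's message. A minimal sketch with illustrative names:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "termination-message-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Exit nonzero without writing /dev/termination-log; the
                    // kubelet then falls back to the log tail ("DONE") for
                    // state.terminated.message, which is what the test asserts.
                    Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

With RestartPolicy Never, the single failed run drives the pod to Failed, matching the "wait for the container to reach Failed" step above.
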
Jan 7 13:48:04.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:48:04.143: INFO: namespace container-runtime-1190 deletion completed in 6.233256689s • [SLOW TEST:14.713 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:48:04.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 7 13:48:04.244: INFO: Waiting up to 5m0s for pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674" in namespace "emptydir-575" to be "success or failure" Jan 7 13:48:04.253: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743381ms Jan 7 13:48:06.265: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020395472s Jan 7 13:48:08.321: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076737813s Jan 7 13:48:10.336: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091293542s Jan 7 13:48:12.347: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102255951s STEP: Saw pod success Jan 7 13:48:12.347: INFO: Pod "pod-78c0f2e8-73fe-44da-999b-5c5cad96c674" satisfied condition "success or failure" Jan 7 13:48:12.351: INFO: Trying to get logs from node iruya-node pod pod-78c0f2e8-73fe-44da-999b-5c5cad96c674 container test-container: STEP: delete the pod Jan 7 13:48:12.428: INFO: Waiting for pod pod-78c0f2e8-73fe-44da-999b-5c5cad96c674 to disappear Jan 7 13:48:12.434: INFO: Pod pod-78c0f2e8-73fe-44da-999b-5c5cad96c674 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:48:12.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-575" for this suite. 
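
Every emptyDir permutation in this run follows the same pattern: mount an emptyDir volume, have a test container create or inspect a file with the expected mode and owner, and let the container's exit status decide "success or failure". A sketch of the (root,0644,default) case with illustrative names; the busybox commands stand in for the suite's purpose-built test image:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Leaving Medium unset selects the node's default,
                    // disk-backed storage; corev1.StorageMediumMemory gives
                    // the tmpfs variants seen elsewhere in this run.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container", // container name from the log
                    Image: "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
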
Jan 7 13:48:18.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:48:18.674: INFO: namespace emptydir-575 deletion completed in 6.231471234s • [SLOW TEST:14.530 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:48:18.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-7009/secret-test-04e59ebd-e181-4b3b-8045-0c98a38396ea STEP: Creating a pod to test consume secrets Jan 7 13:48:19.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e" in namespace "secrets-7009" to be "success or failure" Jan 7 13:48:19.380: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.300413ms Jan 7 13:48:21.399: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030953563s Jan 7 13:48:23.430: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061594862s Jan 7 13:48:25.438: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069819328s Jan 7 13:48:27.449: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Running", Reason="", readiness=true. Elapsed: 8.080707556s Jan 7 13:48:29.459: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090468546s STEP: Saw pod success Jan 7 13:48:29.459: INFO: Pod "pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e" satisfied condition "success or failure" Jan 7 13:48:29.464: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e container env-test: STEP: delete the pod Jan 7 13:48:29.561: INFO: Waiting for pod pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e to disappear Jan 7 13:48:29.572: INFO: Pod pod-configmaps-ac7c2917-3678-4a40-b7bb-d7516a40ea4e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:48:29.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7009" for this suite. 
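
Unlike the volume-based secret tests, this one consumes the secret through the environment: a key of the secret is mapped into an env var, and the env-test container's printed environment is checked. A sketch with illustrative secret name, key, and variable name:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative; the suite generates one per test
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
            StringData: map[string]string{"data-1": "value-1"}, // illustrative key/value
        }
        if _, err := cs.CoreV1().Secrets(ns).Create(secret); err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test", // container name from the log
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA", // illustrative variable name
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
    }
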
Jan 7 13:48:35.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:48:35.747: INFO: namespace secrets-7009 deletion completed in 6.169822256s • [SLOW TEST:17.073 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:48:35.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:48:35.903: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 7 13:48:40.913: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 7 13:48:42.927: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 7 13:48:44.938: INFO: Creating deployment "test-rollover-deployment" Jan 7 13:48:45.056: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 7 13:48:47.076: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 7 13:48:47.087: INFO: Ensure that both replica sets have 1 created replica Jan 7 13:48:47.094: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 7 13:48:47.104: INFO: Updating deployment test-rollover-deployment Jan 7 13:48:47.104: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 7 13:48:49.193: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 7 13:48:49.221: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 7 13:48:49.301: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:49.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 7 13:48:51.330: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:51.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:48:53.322: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:53.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:48:55.365: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:55.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001727, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:48:57.408: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:57.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:48:59.325: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:48:59.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:49:01.331: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:49:01.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:49:03.316: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:49:03.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001736, 
loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:49:05.329: INFO: all replica sets need to contain the pod-template-hash label Jan 7 13:49:05.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714001725, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 13:49:07.322: INFO: Jan 7 13:49:07.322: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 7 13:49:07.335: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/deployments/test-rollover-deployment,UID:41bc2651-486b-41c0-9c17-a26b505a7e1a,ResourceVersion:19653813,Generation:2,CreationTimestamp:2020-01-07 13:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-07 13:48:45 +0000 UTC 2020-01-07 13:48:45 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-07 13:49:06 +0000 UTC 2020-01-07 13:48:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 7 13:49:07.340: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/replicasets/test-rollover-deployment-854595fc44,UID:fd8b833a-be04-4197-a160-7d126a88a587,ResourceVersion:19653803,Generation:2,CreationTimestamp:2020-01-07 13:48:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 41bc2651-486b-41c0-9c17-a26b505a7e1a 0xc002557687 0xc002557688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 7 13:49:07.340: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 7 13:49:07.340: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/replicasets/test-rollover-controller,UID:a0e8b83c-01b7-45e7-92d6-c15c9f4f86c4,ResourceVersion:19653811,Generation:2,CreationTimestamp:2020-01-07 13:48:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 41bc2651-486b-41c0-9c17-a26b505a7e1a 0xc0025572e7 0xc0025572e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 13:49:07.340: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/replicasets/test-rollover-deployment-9b8b997cf,UID:d705f19a-5881-4695-b188-1e78cd508d28,ResourceVersion:19653767,Generation:2,CreationTimestamp:2020-01-07 13:48:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 41bc2651-486b-41c0-9c17-a26b505a7e1a 0xc002557760 0xc002557761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 13:49:07.344: INFO: Pod "test-rollover-deployment-854595fc44-gczwd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gczwd,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5269,SelfLink:/api/v1/namespaces/deployment-5269/pods/test-rollover-deployment-854595fc44-gczwd,UID:785179d1-6935-4806-87dd-ef334139d1ee,ResourceVersion:19653787,Generation:0,CreationTimestamp:2020-01-07 13:48:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
fd8b833a-be04-4197-a160-7d126a88a587 0xc000947d37 0xc000947d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2sbtt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2sbtt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2sbtt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000947dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000947df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:48:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:48:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:48:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:48:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-07 13:48:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-07 13:48:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://75675b84b0b30f30653f3e3d7fe80a17ff3779275f6a4b7b8d795ca4c0f4a089}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:49:07.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5269" for this suite. 
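Stripped of the framework plumbing, the deployment whose struct dump appears above is equivalent to this manifest (fields reconstructed from the dump; only the comments are added):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rollover-deployment
    namespace: deployment-5269
  spec:
    replicas: 1
    minReadySeconds: 10              # MinReadySeconds:10 in the dump; forces the 10s "progressing" window polled above
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0            # never drop below one available pod during the rollover
        maxSurge: 1                  # allow one extra pod, so old and new ReplicaSets briefly coexist
    selector:
      matchLabels:
        name: rollover-pod
    template:
      metadata:
        labels:
          name: rollover-pod
      spec:
        terminationGracePeriodSeconds: 0
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # the post-update image; revision 1 used gb-redisslave:nonexistent

maxUnavailable: 0 combined with minReadySeconds: 10 is why the loop above polls for roughly twenty seconds: the new pod must stay Ready for ten consecutive seconds before the old ReplicaSets (test-rollover-controller and test-rollover-deployment-9b8b997cf) are scaled to zero.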
Jan 7 13:49:15.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:49:15.594: INFO: namespace deployment-5269 deletion completed in 8.246168187s • [SLOW TEST:39.847 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:49:15.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 7 13:49:15.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5712' Jan 7 13:49:18.201: INFO: stderr: "" Jan 7 13:49:18.201: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 7 13:49:18.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5712' Jan 7 13:49:23.080: INFO: stderr: "" Jan 7 13:49:23.080: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:49:23.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5712" for this suite. 
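With --restart=Never and --generator=run-pod/v1, the kubectl invocation above emits a bare Pod rather than a Deployment; written out as a manifest it is roughly:

  apiVersion: v1
  kind: Pod
  metadata:
    name: e2e-test-nginx-pod
    namespace: kubectl-5712
    labels:
      run: e2e-test-nginx-pod        # label added by kubectl run (assumed key)
  spec:
    restartPolicy: Never             # from --restart=Never
    containers:
    - name: e2e-test-nginx-pod       # kubectl run names the container after the pod
      image: docker.io/library/nginx:1.14-alpine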
Jan 7 13:49:29.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:49:29.289: INFO: namespace kubectl-5712 deletion completed in 6.195696691s • [SLOW TEST:13.694 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:49:29.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 7 13:49:29.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2043' Jan 7 13:49:29.957: INFO: stderr: "" Jan 7 13:49:29.957: INFO: stdout: "pod/pause created\n" Jan 7 13:49:29.957: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 7 13:49:29.958: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2043" to be "running and ready" Jan 7 13:49:29.960: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819709ms Jan 7 13:49:31.968: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010673871s Jan 7 13:49:33.983: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025168988s Jan 7 13:49:35.988: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030155611s Jan 7 13:49:38.011: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053357458s Jan 7 13:49:40.016: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.058970845s Jan 7 13:49:40.017: INFO: Pod "pause" satisfied condition "running and ready" Jan 7 13:49:40.017: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 7 13:49:40.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2043' Jan 7 13:49:40.233: INFO: stderr: "" Jan 7 13:49:40.233: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 7 13:49:40.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2043' Jan 7 13:49:40.366: INFO: stderr: "" Jan 7 13:49:40.367: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 7 13:49:40.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2043' Jan 7 13:49:40.462: INFO: stderr: "" Jan 7 13:49:40.463: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 7 13:49:40.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2043' Jan 7 13:49:40.573: INFO: stderr: "" Jan 7 13:49:40.573: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 7 13:49:40.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2043' Jan 7 13:49:40.726: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 7 13:49:40.726: INFO: stdout: "pod \"pause\" force deleted\n" Jan 7 13:49:40.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2043' Jan 7 13:49:40.958: INFO: stderr: "No resources found.\n" Jan 7 13:49:40.958: INFO: stdout: "" Jan 7 13:49:40.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2043 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 7 13:49:41.063: INFO: stderr: "" Jan 7 13:49:41.063: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:49:41.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2043" for this suite. 
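The pause pod piped into kubectl create -f - at the start of this spec is roughly the following; note the name=pause label, which the -l name=pause cleanup queries above select on:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pause
    namespace: kubectl-2043
    labels:
      name: pause
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1    # assumed tag; the suite pins its own pause image version

The label operations themselves need no manifest: kubectl label pods pause testing-label=testing-label-value sets the label, and the trailing-dash form testing-label- removes it, exactly as logged above.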
Jan 7 13:49:47.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:49:47.253: INFO: namespace kubectl-2043 deletion completed in 6.177576362s • [SLOW TEST:17.964 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:49:47.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7931, will wait for the garbage collector to delete the pods Jan 7 13:49:57.497: INFO: Deleting Job.batch foo took: 28.973108ms Jan 7 13:49:57.798: INFO: Terminating Job.batch foo pods took: 300.765665ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:50:46.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7931" for this suite. 
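Reconstructed from the steps above ("Ensuring active pods == parallelism", then a delete that waits for the garbage collector), the job looks roughly like this; parallelism and the pod command are assumed, since the log does not record them:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: foo                        # matches "deleting Job.batch foo" above
    namespace: job-7931
  spec:
    parallelism: 2                   # assumed count; the test only asserts active pods == parallelism
    template:
      spec:
        restartPolicy: OnFailure
        containers:
        - name: c                    # illustrative name
          image: busybox             # stand-in for the framework's long-running image
          command: ["sleep", "3600"]

Because the framework waits for the garbage collector to remove the pods after deleting the Job, nearly a minute elapses between "Terminating Job.batch foo pods" and the namespace teardown.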
Jan 7 13:50:52.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:50:52.767: INFO: namespace job-7931 deletion completed in 6.150890121s • [SLOW TEST:65.514 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:50:52.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:50:52.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972" in namespace "downward-api-19" to be "success or failure" Jan 7 13:50:52.901: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Pending", Reason="", readiness=false. Elapsed: 7.051428ms Jan 7 13:50:55.405: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510890276s Jan 7 13:50:57.420: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525900548s Jan 7 13:50:59.433: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Pending", Reason="", readiness=false. Elapsed: 6.538269838s Jan 7 13:51:01.445: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550095533s Jan 7 13:51:03.453: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.558866624s STEP: Saw pod success Jan 7 13:51:03.453: INFO: Pod "downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972" satisfied condition "success or failure" Jan 7 13:51:03.461: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972 container client-container: STEP: delete the pod Jan 7 13:51:03.707: INFO: Waiting for pod downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972 to disappear Jan 7 13:51:03.718: INFO: Pod downwardapi-volume-28b81230-fd87-4c3a-80f9-aaefe809a972 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:51:03.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-19" for this suite. 
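The downward API volume exercised above exposes container resource fields as files; when no memory limit is set on the container, the kubelet substitutes the node's allocatable memory, which is exactly what this spec asserts. A minimal equivalent pod (paths and names are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo    # real pods are named downwardapi-volume-<uuid>
  spec:
    restartPolicy: Never
    containers:
    - name: client-container         # container name taken from the log above
      image: busybox                 # stand-in for the framework's mounttest image
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      # deliberately no resources.limits.memory here: limits.memory then resolves
      # to node allocatable memory, per the behavior under test
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory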
Jan 7 13:51:09.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:51:09.916: INFO: namespace downward-api-19 deletion completed in 6.192955273s • [SLOW TEST:17.149 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:51:09.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-388e7d34-e662-455c-81cc-9157e4c65ad0 STEP: Creating a pod to test consume configMaps Jan 7 13:51:10.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d" in namespace "configmap-5612" to be "success or failure" Jan 7 13:51:10.085: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d": Phase="Pending", Reason="", readiness=false. Elapsed: 41.374458ms Jan 7 13:51:12.096: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051792467s Jan 7 13:51:14.105: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061285966s Jan 7 13:51:16.122: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077813889s Jan 7 13:51:18.134: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089614112s STEP: Saw pod success Jan 7 13:51:18.134: INFO: Pod "pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d" satisfied condition "success or failure" Jan 7 13:51:18.139: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d container configmap-volume-test: STEP: delete the pod Jan 7 13:51:18.247: INFO: Waiting for pod pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d to disappear Jan 7 13:51:18.257: INFO: Pod pod-configmaps-b48eeadb-0d7b-44f2-8926-0a921f40001d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:51:18.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5612" for this suite. 
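"Mappings" in this spec's name refers to the items list of a configMap volume, which projects chosen keys to arbitrary paths, and "Item mode" is the per-item file mode the test verifies. A rough equivalent (key, value, and mode are assumed; real names carry UUID suffixes):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-volume-map
  data:
    data-1: value-1                  # assumed entry
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test    # container name taken from the log above
      image: busybox                 # stand-in for the framework's mounttest image
      command: ["sh", "-c", "ls -l /etc/configmap-volume/mapped-file && cat /etc/configmap-volume/mapped-file"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map
        items:
        - key: data-1
          path: mapped-file          # the mapping: key data-1 surfaces as ./mapped-file
          mode: 0400                 # the per-item mode under test; 0400 is illustrative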
Jan 7 13:51:24.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:51:24.515: INFO: namespace configmap-5612 deletion completed in 6.244775283s • [SLOW TEST:14.598 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:51:24.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 13:51:24.615: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 7 13:51:29.624: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 7 13:51:33.646: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 7 13:51:33.736: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6970,SelfLink:/apis/apps/v1/namespaces/deployment-6970/deployments/test-cleanup-deployment,UID:9b00b85c-0c06-447b-bea4-e4fb22611a3d,ResourceVersion:19654205,Generation:1,CreationTimestamp:2020-01-07 13:51:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 7 13:51:33.767: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6970,SelfLink:/apis/apps/v1/namespaces/deployment-6970/replicasets/test-cleanup-deployment-55bbcbc84c,UID:17cf1918-9cba-4e3a-b8d6-105a0d815f14,ResourceVersion:19654207,Generation:1,CreationTimestamp:2020-01-07 13:51:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9b00b85c-0c06-447b-bea4-e4fb22611a3d 0xc0027efd77 0xc0027efd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 13:51:33.767: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 7 13:51:33.767: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-6970,SelfLink:/apis/apps/v1/namespaces/deployment-6970/replicasets/test-cleanup-controller,UID:35e626bf-665f-4467-b857-de564c6ca100,ResourceVersion:19654206,Generation:1,CreationTimestamp:2020-01-07 13:51:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9b00b85c-0c06-447b-bea4-e4fb22611a3d 0xc0027efca7 0xc0027efca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 7 13:51:33.820: INFO: Pod "test-cleanup-controller-r2xhh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-r2xhh,GenerateName:test-cleanup-controller-,Namespace:deployment-6970,SelfLink:/api/v1/namespaces/deployment-6970/pods/test-cleanup-controller-r2xhh,UID:56ecfb9a-bfcb-4cea-a55a-88bea33b1c45,ResourceVersion:19654203,Generation:0,CreationTimestamp:2020-01-07 13:51:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 35e626bf-665f-4467-b857-de564c6ca100 0xc0032135f7 0xc0032135f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nfdzw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nfdzw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nfdzw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003213670} {node.kubernetes.io/unreachable Exists NoExecute 0xc003213690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:51:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:51:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:51:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:51:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-07 13:51:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 13:51:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://38cd349ebff181be0815198d33fcd0cb3a5b6baa4c9926d8efe4db7c43e613ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 13:51:33.822: INFO: Pod "test-cleanup-deployment-55bbcbc84c-9nhz7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-9nhz7,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6970,SelfLink:/api/v1/namespaces/deployment-6970/pods/test-cleanup-deployment-55bbcbc84c-9nhz7,UID:9a8b41c8-2c9a-43a4-aa0a-4364907530a2,ResourceVersion:19654213,Generation:0,CreationTimestamp:2020-01-07 13:51:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 17cf1918-9cba-4e3a-b8d6-105a0d815f14 0xc003213777 0xc003213778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nfdzw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nfdzw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nfdzw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0032137f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003213810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:51:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:51:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6970" for this suite. 
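------------------------------
For reference alongside the ReplicaSet dumps above: a minimal client-go sketch of creating a Deployment whose revisionHistoryLimit is 0, which asks the controller to garbage-collect superseded ReplicaSets, the cleanup behaviour this spec asserts. This is a sketch, not the suite's code: it assumes the v1.15-era, pre-context client-go method signatures matching the logged server version, and the kubeconfig path and object names mirror the log but are illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load the same kubeconfig the test run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// RevisionHistoryLimit: 0 tells the Deployment controller to
			// delete superseded ReplicaSets instead of retaining them.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"},
					},
				},
			},
		},
	}
	if _, err := clientset.AppsV1().Deployments("deployment-6970").Create(d); err != nil {
		panic(err)
	}
	fmt.Println("deployment created; old ReplicaSets will be garbage-collected")
}
------------------------------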
Jan 7 13:51:39.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:51:40.034: INFO: namespace deployment-6970 deletion completed in 6.157070373s • [SLOW TEST:15.520 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:51:40.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3995 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3995 to expose endpoints map[] Jan 7 13:51:40.552: INFO: Get endpoints failed (154.020266ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 7 13:51:41.562: INFO: successfully validated that service multi-endpoint-test in namespace services-3995 exposes endpoints map[] (1.163645526s elapsed) STEP: Creating pod pod1 in namespace services-3995 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3995 to expose endpoints map[pod1:[100]] Jan 7 13:51:45.756: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.174159184s elapsed, will retry) Jan 7 13:51:50.925: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.343690109s elapsed, will retry) Jan 7 13:51:52.956: INFO: successfully validated that service multi-endpoint-test in namespace services-3995 exposes endpoints map[pod1:[100]] (11.374470971s elapsed) STEP: Creating pod pod2 in namespace services-3995 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3995 to expose endpoints map[pod1:[100] pod2:[101]] Jan 7 13:51:58.242: INFO: Unexpected endpoints: found map[6000e21f-4fe3-48c2-945f-65a657d3a656:[100]], expected map[pod1:[100] pod2:[101]] (5.266662131s elapsed, will retry) Jan 7 13:52:01.401: INFO: successfully validated that service multi-endpoint-test in namespace services-3995 exposes endpoints map[pod1:[100] pod2:[101]] (8.426176126s elapsed) STEP: Deleting pod pod1 in namespace services-3995 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3995 to expose endpoints map[pod2:[101]] Jan 7 13:52:02.497: INFO: successfully validated that service multi-endpoint-test in namespace services-3995 exposes endpoints map[pod2:[101]] (1.082459811s elapsed) STEP: Deleting pod pod2 in namespace services-3995 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3995 to expose endpoints map[] Jan 7 13:52:02.594: INFO: successfully validated that service multi-endpoint-test 
in namespace services-3995 exposes endpoints map[] (68.443368ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:52:02.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3995" for this suite. Jan 7 13:52:08.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:52:08.936: INFO: namespace services-3995 deletion completed in 6.248023703s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:28.901 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:52:08.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-51 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 7 13:52:09.097: INFO: Found 0 stateful pods, waiting for 3 Jan 7 13:52:19.129: INFO: Found 2 stateful pods, waiting for 3 Jan 7 13:52:29.112: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:52:29.112: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:52:29.112: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 7 13:52:39.109: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:52:39.109: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:52:39.109: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 7 13:52:39.146: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 7 13:52:49.300: INFO: Updating stateful set ss2 Jan 7 13:52:49.412: INFO: Waiting for Pod statefulset-51/ss2-2 to have revision ss2-6c5cd755cd update 
revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 7 13:52:59.846: INFO: Found 2 stateful pods, waiting for 3 Jan 7 13:53:09.863: INFO: Found 2 stateful pods, waiting for 3 Jan 7 13:53:19.868: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:53:19.869: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:53:19.869: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 7 13:53:29.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:53:29.864: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:53:29.864: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 7 13:53:29.910: INFO: Updating stateful set ss2 Jan 7 13:53:30.024: INFO: Waiting for Pod statefulset-51/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 7 13:53:40.187: INFO: Updating stateful set ss2 Jan 7 13:53:40.583: INFO: Waiting for StatefulSet statefulset-51/ss2 to complete update Jan 7 13:53:40.583: INFO: Waiting for Pod statefulset-51/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 7 13:53:50.621: INFO: Waiting for StatefulSet statefulset-51/ss2 to complete update Jan 7 13:53:50.621: INFO: Waiting for Pod statefulset-51/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 7 13:54:00.599: INFO: Waiting for StatefulSet statefulset-51/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 7 13:54:10.602: INFO: Deleting all statefulset in ns statefulset-51 Jan 7 13:54:10.608: INFO: Scaling statefulset ss2 to 0 Jan 7 13:54:50.717: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:54:50.723: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:54:50.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-51" for this suite. 
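------------------------------
The canary and phased stages above are driven by the StatefulSet rolling-update partition: pods with an ordinal greater than or equal to the partition move to the new template revision, while lower ordinals stay on the old one, so lowering the partition step by step phases the rollout. A minimal client-go sketch of moving the partition (again assuming the v1.15-era, pre-context method signatures; names are taken from the log, the kubeconfig path is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setPartition updates a StatefulSet's rolling-update partition. Only pods
// with an ordinal >= partition are rolled to the new revision.
func setPartition(cs *kubernetes.Clientset, ns, name string, partition int32) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	_, err = cs.AppsV1().StatefulSets(ns).Update(ss)
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Canary: with 3 replicas, partition=2 updates only ss2-2.
	// Lowering the partition afterwards phases the rollout to ss2-1, ss2-0.
	if err := setPartition(cs, "statefulset-51", "ss2", 2); err != nil {
		panic(err)
	}
	fmt.Println("partition set; only ordinals >= 2 roll to the new revision")
}
------------------------------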
Jan 7 13:54:58.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:54:58.942: INFO: namespace statefulset-51 deletion completed in 8.176914519s • [SLOW TEST:170.005 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:54:58.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 7 13:54:59.089: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 7 13:55:04.105: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:55:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4220" for this suite. 
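------------------------------
The release above happens because the pod's labels stop matching the ReplicationController's selector: the controller then orphans the pod, dropping its ownerReference, and creates a replacement to restore the replica count. A minimal sketch of flipping a label with a strategic-merge patch (v1.15-era client-go signatures; the pod name and new label value here are hypothetical, chosen only to stop matching a selector on name=pod-release):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Overwrite the label the ReplicationController selects on. Once the
	// pod no longer matches the selector, the controller releases it and
	// spins up a replacement. "pod-release-xxxxx" is a placeholder name.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	if _, err := cs.CoreV1().Pods("replication-controller-4220").
		Patch("pod-release-xxxxx", types.StrategicMergePatchType, patch); err != nil {
		panic(err)
	}
	fmt.Println("pod relabeled; the RC will release it and create a replacement")
}
------------------------------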
Jan 7 13:55:11.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:55:11.331: INFO: namespace replication-controller-4220 deletion completed in 6.178989542s • [SLOW TEST:12.388 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:55:11.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 7 13:55:11.458: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix749872212/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:55:11.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7127" for this suite. 
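------------------------------
The proxy spec above only needs to fetch /api/ through the unix socket that kubectl created. A minimal Go sketch of the same check, routing plain HTTP over the socket (the socket path is copied from this run's log and is illustrative; any client that can dial a unix socket would do):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Path passed to `kubectl proxy --unix-socket=...` in the log above.
	const sock = "/tmp/kubectl-proxy-unix749872212/test"

	client := &http.Client{
		Transport: &http.Transport{
			// Dial the proxy's unix socket for every request; the host
			// in the request URL below is then effectively ignored.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}

	resp, err := client.Get("http://unix/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expect the apiserver's APIVersions document
}
------------------------------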
Jan 7 13:55:17.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:55:17.681: INFO: namespace kubectl-7127 deletion completed in 6.154645671s • [SLOW TEST:6.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:55:17.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5eff9781-ea57-4c07-a86a-c4f16abecf5f STEP: Creating a pod to test consume configMaps Jan 7 13:55:17.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf" in namespace "projected-4233" to be "success or failure" Jan 7 13:55:17.995: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.27866ms Jan 7 13:55:20.008: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026480987s Jan 7 13:55:22.057: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075534975s Jan 7 13:55:24.070: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088329341s Jan 7 13:55:26.086: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104792583s Jan 7 13:55:28.096: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.114495378s STEP: Saw pod success Jan 7 13:55:28.096: INFO: Pod "pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf" satisfied condition "success or failure" Jan 7 13:55:28.101: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf container projected-configmap-volume-test: STEP: delete the pod Jan 7 13:55:28.233: INFO: Waiting for pod pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf to disappear Jan 7 13:55:28.248: INFO: Pod pod-projected-configmaps-7ab0d0e0-c36a-421f-af53-07b7697271cf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:55:28.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4233" for this suite. Jan 7 13:55:34.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:55:34.439: INFO: namespace projected-4233 deletion completed in 6.184613396s • [SLOW TEST:16.757 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:55:34.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9744 I0107 13:55:34.563454 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9744, replica count: 1 I0107 13:55:35.614896 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:36.615610 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:37.616213 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:38.617947 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:39.619178 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:40.620521 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:41.621573 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:42.622299 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0107 13:55:43.623159 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 7 13:55:43.844: INFO: Created: latency-svc-n72jp Jan 7 13:55:43.861: INFO: Got endpoints: latency-svc-n72jp [137.689904ms] Jan 7 13:55:44.017: INFO: Created: latency-svc-lsmz4 Jan 7 13:55:44.089: INFO: Got endpoints: latency-svc-lsmz4 [222.391289ms] Jan 7 13:55:44.093: INFO: Created: latency-svc-4tjjt Jan 7 13:55:44.189: INFO: Got endpoints: latency-svc-4tjjt [325.100065ms] Jan 7 13:55:44.219: INFO: Created: latency-svc-xszcl Jan 7 13:55:44.227: INFO: Got endpoints: latency-svc-xszcl [360.446077ms] Jan 7 13:55:44.283: INFO: Created: latency-svc-dwtcm Jan 7 13:55:44.355: INFO: Got endpoints: latency-svc-dwtcm [488.465085ms] Jan 7 13:55:44.383: INFO: Created: latency-svc-rzqv9 Jan 7 13:55:44.389: INFO: Got endpoints: latency-svc-rzqv9 [521.821429ms] Jan 7 13:55:44.433: INFO: Created: latency-svc-6bmrj Jan 7 13:55:44.444: INFO: Got endpoints: latency-svc-6bmrj [576.884012ms] Jan 7 13:55:44.559: INFO: Created: latency-svc-fqvg4 Jan 7 13:55:44.566: INFO: Got endpoints: latency-svc-fqvg4 [699.392174ms] Jan 7 13:55:44.638: INFO: Created: latency-svc-cnq4s Jan 7 13:55:44.701: INFO: Got endpoints: latency-svc-cnq4s [833.63639ms] Jan 7 13:55:44.730: INFO: Created: latency-svc-kstgr Jan 7 13:55:44.731: INFO: Got endpoints: latency-svc-kstgr [863.381567ms] Jan 7 13:55:44.792: INFO: Created: latency-svc-pdklh Jan 7 13:55:44.792: INFO: Got endpoints: latency-svc-pdklh [925.01985ms] Jan 7 13:55:44.887: INFO: Created: latency-svc-6k4qb Jan 7 13:55:44.892: INFO: Got endpoints: latency-svc-6k4qb [1.024938003s] Jan 7 13:55:44.947: INFO: Created: latency-svc-ldfph Jan 7 13:55:44.955: INFO: Got endpoints: latency-svc-ldfph [1.08807519s] Jan 7 13:55:45.105: INFO: Created: latency-svc-ghbp4 Jan 7 13:55:45.108: INFO: Got endpoints: latency-svc-ghbp4 [1.241801637s] Jan 7 13:55:45.160: INFO: Created: latency-svc-ctdth Jan 7 13:55:45.161: INFO: Got endpoints: latency-svc-ctdth [1.295142545s] Jan 7 13:55:45.245: INFO: Created: latency-svc-zhs4r Jan 7 13:55:45.250: INFO: Got endpoints: latency-svc-zhs4r [1.382575688s] Jan 7 13:55:45.293: INFO: Created: latency-svc-tf644 Jan 7 13:55:45.310: INFO: Got endpoints: latency-svc-tf644 [1.220734526s] Jan 7 13:55:45.423: INFO: Created: latency-svc-lj6bq Jan 7 13:55:45.436: INFO: Got endpoints: latency-svc-lj6bq [1.24721035s] Jan 7 13:55:45.634: INFO: Created: latency-svc-4vthr Jan 7 13:55:45.661: INFO: Got endpoints: latency-svc-4vthr [1.433737574s] Jan 7 13:55:45.744: INFO: Created: latency-svc-57zml Jan 7 13:55:45.747: INFO: Got endpoints: latency-svc-57zml [1.391141132s] Jan 7 13:55:45.796: INFO: Created: latency-svc-jw29k Jan 7 13:55:45.895: INFO: Got endpoints: latency-svc-jw29k [1.506490018s] Jan 7 13:55:45.898: INFO: Created: latency-svc-6ztfh Jan 7 13:55:45.956: INFO: Created: latency-svc-55qrk Jan 7 13:55:45.956: INFO: Got endpoints: latency-svc-6ztfh [1.511934129s] Jan 7 13:55:45.966: INFO: Got endpoints: latency-svc-55qrk [1.399112213s] Jan 7 13:55:46.129: INFO: Created: latency-svc-2xd7w Jan 7 13:55:46.152: INFO: Got endpoints: latency-svc-2xd7w [1.450756258s] Jan 7 13:55:46.212: INFO: Created: latency-svc-rwhk5 Jan 7 13:55:46.212: INFO: Got endpoints: 
latency-svc-rwhk5 [1.481266975s] Jan 7 13:55:46.359: INFO: Created: latency-svc-ww6vk Jan 7 13:55:46.373: INFO: Got endpoints: latency-svc-ww6vk [1.580512869s] Jan 7 13:55:46.477: INFO: Created: latency-svc-mmc8p Jan 7 13:55:46.490: INFO: Got endpoints: latency-svc-mmc8p [1.597683065s] Jan 7 13:55:46.568: INFO: Created: latency-svc-ml62b Jan 7 13:55:46.646: INFO: Got endpoints: latency-svc-ml62b [1.689998032s] Jan 7 13:55:46.735: INFO: Created: latency-svc-hg6vf Jan 7 13:55:46.816: INFO: Got endpoints: latency-svc-hg6vf [1.707028379s] Jan 7 13:55:46.850: INFO: Created: latency-svc-ccn9p Jan 7 13:55:46.911: INFO: Got endpoints: latency-svc-ccn9p [1.749703945s] Jan 7 13:55:46.915: INFO: Created: latency-svc-h5k75 Jan 7 13:55:46.972: INFO: Got endpoints: latency-svc-h5k75 [1.722788383s] Jan 7 13:55:47.025: INFO: Created: latency-svc-8q9qx Jan 7 13:55:47.138: INFO: Got endpoints: latency-svc-8q9qx [1.828015706s] Jan 7 13:55:47.148: INFO: Created: latency-svc-nc8t2 Jan 7 13:55:47.150: INFO: Got endpoints: latency-svc-nc8t2 [1.712888306s] Jan 7 13:55:47.214: INFO: Created: latency-svc-xv5pr Jan 7 13:55:47.230: INFO: Got endpoints: latency-svc-xv5pr [1.56906171s] Jan 7 13:55:47.372: INFO: Created: latency-svc-kdzf8 Jan 7 13:55:47.384: INFO: Got endpoints: latency-svc-kdzf8 [1.636665666s] Jan 7 13:55:47.500: INFO: Created: latency-svc-5mpk6 Jan 7 13:55:47.519: INFO: Got endpoints: latency-svc-5mpk6 [1.623692995s] Jan 7 13:55:47.629: INFO: Created: latency-svc-z7hkl Jan 7 13:55:47.636: INFO: Got endpoints: latency-svc-z7hkl [1.679314461s] Jan 7 13:55:47.702: INFO: Created: latency-svc-rllxd Jan 7 13:55:47.862: INFO: Got endpoints: latency-svc-rllxd [1.895894904s] Jan 7 13:55:47.877: INFO: Created: latency-svc-gmq9h Jan 7 13:55:47.891: INFO: Got endpoints: latency-svc-gmq9h [1.739048318s] Jan 7 13:55:47.951: INFO: Created: latency-svc-k8sjb Jan 7 13:55:48.052: INFO: Got endpoints: latency-svc-k8sjb [1.839772043s] Jan 7 13:55:48.098: INFO: Created: latency-svc-sw6r9 Jan 7 13:55:48.111: INFO: Got endpoints: latency-svc-sw6r9 [1.737449774s] Jan 7 13:55:48.310: INFO: Created: latency-svc-6rz2t Jan 7 13:55:48.373: INFO: Got endpoints: latency-svc-6rz2t [1.881833648s] Jan 7 13:55:48.397: INFO: Created: latency-svc-wrb89 Jan 7 13:55:48.400: INFO: Got endpoints: latency-svc-wrb89 [1.753390561s] Jan 7 13:55:48.536: INFO: Created: latency-svc-gjnld Jan 7 13:55:48.543: INFO: Got endpoints: latency-svc-gjnld [1.726751282s] Jan 7 13:55:48.657: INFO: Created: latency-svc-6qd5x Jan 7 13:55:48.685: INFO: Got endpoints: latency-svc-6qd5x [1.772666234s] Jan 7 13:55:48.723: INFO: Created: latency-svc-4tdfz Jan 7 13:55:48.810: INFO: Got endpoints: latency-svc-4tdfz [1.83717284s] Jan 7 13:55:48.818: INFO: Created: latency-svc-t9lvk Jan 7 13:55:48.819: INFO: Got endpoints: latency-svc-t9lvk [1.679815017s] Jan 7 13:55:48.878: INFO: Created: latency-svc-59fvx Jan 7 13:55:48.890: INFO: Got endpoints: latency-svc-59fvx [1.740527166s] Jan 7 13:55:48.987: INFO: Created: latency-svc-tghz2 Jan 7 13:55:49.074: INFO: Created: latency-svc-2lvrf Jan 7 13:55:49.152: INFO: Got endpoints: latency-svc-tghz2 [1.921535725s] Jan 7 13:55:49.164: INFO: Got endpoints: latency-svc-2lvrf [272.887306ms] Jan 7 13:55:49.219: INFO: Created: latency-svc-p5ftt Jan 7 13:55:49.221: INFO: Got endpoints: latency-svc-p5ftt [1.837164236s] Jan 7 13:55:49.304: INFO: Created: latency-svc-z9wh4 Jan 7 13:55:49.324: INFO: Got endpoints: latency-svc-z9wh4 [1.803768983s] Jan 7 13:55:49.383: INFO: Created: latency-svc-f6tng Jan 7 13:55:49.385: INFO: Got endpoints: 
latency-svc-f6tng [1.748479687s] Jan 7 13:55:49.486: INFO: Created: latency-svc-4n8cw Jan 7 13:55:49.495: INFO: Got endpoints: latency-svc-4n8cw [1.631936696s] Jan 7 13:55:49.577: INFO: Created: latency-svc-2rnjm Jan 7 13:55:49.577: INFO: Got endpoints: latency-svc-2rnjm [1.685689151s] Jan 7 13:55:49.766: INFO: Created: latency-svc-l85r4 Jan 7 13:55:49.797: INFO: Got endpoints: latency-svc-l85r4 [1.744652632s] Jan 7 13:55:49.962: INFO: Created: latency-svc-xr5jv Jan 7 13:55:49.982: INFO: Got endpoints: latency-svc-xr5jv [1.871019599s] Jan 7 13:55:50.048: INFO: Created: latency-svc-65jc8 Jan 7 13:55:50.118: INFO: Got endpoints: latency-svc-65jc8 [1.744192866s] Jan 7 13:55:50.176: INFO: Created: latency-svc-cbpkk Jan 7 13:55:50.197: INFO: Got endpoints: latency-svc-cbpkk [1.796485846s] Jan 7 13:55:50.384: INFO: Created: latency-svc-wbj47 Jan 7 13:55:50.405: INFO: Got endpoints: latency-svc-wbj47 [1.86151256s] Jan 7 13:55:50.442: INFO: Created: latency-svc-nmn2t Jan 7 13:55:50.466: INFO: Got endpoints: latency-svc-nmn2t [1.780731753s] Jan 7 13:55:50.610: INFO: Created: latency-svc-6cbfn Jan 7 13:55:50.620: INFO: Got endpoints: latency-svc-6cbfn [1.809528018s] Jan 7 13:55:50.665: INFO: Created: latency-svc-rl677 Jan 7 13:55:50.704: INFO: Got endpoints: latency-svc-rl677 [1.884721169s] Jan 7 13:55:50.828: INFO: Created: latency-svc-jw88b Jan 7 13:55:50.847: INFO: Got endpoints: latency-svc-jw88b [1.694021897s] Jan 7 13:55:50.892: INFO: Created: latency-svc-ljsfp Jan 7 13:55:50.907: INFO: Got endpoints: latency-svc-ljsfp [1.742401968s] Jan 7 13:55:50.985: INFO: Created: latency-svc-l6p49 Jan 7 13:55:51.005: INFO: Got endpoints: latency-svc-l6p49 [1.783422561s] Jan 7 13:55:51.065: INFO: Created: latency-svc-chtvb Jan 7 13:55:51.083: INFO: Got endpoints: latency-svc-chtvb [1.758958735s] Jan 7 13:55:51.229: INFO: Created: latency-svc-57h4x Jan 7 13:55:51.244: INFO: Got endpoints: latency-svc-57h4x [1.858670227s] Jan 7 13:55:51.292: INFO: Created: latency-svc-9jcz7 Jan 7 13:55:51.292: INFO: Got endpoints: latency-svc-9jcz7 [1.797361831s] Jan 7 13:55:51.407: INFO: Created: latency-svc-wl2mp Jan 7 13:55:51.426: INFO: Got endpoints: latency-svc-wl2mp [1.848317886s] Jan 7 13:55:51.453: INFO: Created: latency-svc-j9cg4 Jan 7 13:55:51.468: INFO: Got endpoints: latency-svc-j9cg4 [1.669732916s] Jan 7 13:55:51.537: INFO: Created: latency-svc-ldfz6 Jan 7 13:55:51.561: INFO: Got endpoints: latency-svc-ldfz6 [1.578081399s] Jan 7 13:55:51.606: INFO: Created: latency-svc-rqzwn Jan 7 13:55:51.700: INFO: Got endpoints: latency-svc-rqzwn [1.581173265s] Jan 7 13:55:51.716: INFO: Created: latency-svc-v82qd Jan 7 13:55:51.718: INFO: Got endpoints: latency-svc-v82qd [1.520023948s] Jan 7 13:55:51.792: INFO: Created: latency-svc-6t42j Jan 7 13:55:51.874: INFO: Got endpoints: latency-svc-6t42j [1.468715558s] Jan 7 13:55:51.874: INFO: Created: latency-svc-fjn9q Jan 7 13:55:51.902: INFO: Got endpoints: latency-svc-fjn9q [1.434983232s] Jan 7 13:55:52.057: INFO: Created: latency-svc-k9ksp Jan 7 13:55:52.097: INFO: Created: latency-svc-zvvnm Jan 7 13:55:52.097: INFO: Got endpoints: latency-svc-k9ksp [1.475978004s] Jan 7 13:55:52.116: INFO: Got endpoints: latency-svc-zvvnm [1.41174992s] Jan 7 13:55:52.158: INFO: Created: latency-svc-dds45 Jan 7 13:55:52.224: INFO: Got endpoints: latency-svc-dds45 [1.376498537s] Jan 7 13:55:52.252: INFO: Created: latency-svc-rdpb8 Jan 7 13:55:52.276: INFO: Got endpoints: latency-svc-rdpb8 [1.369105417s] Jan 7 13:55:52.459: INFO: Created: latency-svc-bw6th Jan 7 13:55:52.501: INFO: Got endpoints: 
latency-svc-bw6th [1.495494735s] Jan 7 13:55:52.504: INFO: Created: latency-svc-xlwrg Jan 7 13:55:52.525: INFO: Got endpoints: latency-svc-xlwrg [1.441072369s] Jan 7 13:55:52.633: INFO: Created: latency-svc-drgwm Jan 7 13:55:52.641: INFO: Got endpoints: latency-svc-drgwm [1.397002041s] Jan 7 13:55:52.690: INFO: Created: latency-svc-68664 Jan 7 13:55:52.702: INFO: Got endpoints: latency-svc-68664 [1.408888819s] Jan 7 13:55:52.798: INFO: Created: latency-svc-tnrn8 Jan 7 13:55:52.830: INFO: Got endpoints: latency-svc-tnrn8 [1.403420815s] Jan 7 13:55:52.869: INFO: Created: latency-svc-2q6ns Jan 7 13:55:52.913: INFO: Got endpoints: latency-svc-2q6ns [1.44492682s] Jan 7 13:55:52.943: INFO: Created: latency-svc-jfgxx Jan 7 13:55:52.947: INFO: Got endpoints: latency-svc-jfgxx [1.386104055s] Jan 7 13:55:52.974: INFO: Created: latency-svc-xdwdz Jan 7 13:55:52.991: INFO: Got endpoints: latency-svc-xdwdz [1.291445812s] Jan 7 13:55:53.135: INFO: Created: latency-svc-bt79t Jan 7 13:55:53.146: INFO: Got endpoints: latency-svc-bt79t [1.427648273s] Jan 7 13:55:53.200: INFO: Created: latency-svc-pjrbp Jan 7 13:55:53.213: INFO: Got endpoints: latency-svc-pjrbp [1.338093739s] Jan 7 13:55:53.316: INFO: Created: latency-svc-fdvdx Jan 7 13:55:53.321: INFO: Got endpoints: latency-svc-fdvdx [1.418654148s] Jan 7 13:55:53.376: INFO: Created: latency-svc-p8qwd Jan 7 13:55:53.402: INFO: Got endpoints: latency-svc-p8qwd [1.305295542s] Jan 7 13:55:53.538: INFO: Created: latency-svc-524hv Jan 7 13:55:53.556: INFO: Got endpoints: latency-svc-524hv [1.440059791s] Jan 7 13:55:53.601: INFO: Created: latency-svc-vr8rj Jan 7 13:55:53.621: INFO: Got endpoints: latency-svc-vr8rj [1.39621022s] Jan 7 13:55:53.766: INFO: Created: latency-svc-dcwtx Jan 7 13:55:53.789: INFO: Got endpoints: latency-svc-dcwtx [1.51188114s] Jan 7 13:55:53.852: INFO: Created: latency-svc-kxm8v Jan 7 13:55:53.948: INFO: Got endpoints: latency-svc-kxm8v [1.446389792s] Jan 7 13:55:53.959: INFO: Created: latency-svc-ksngf Jan 7 13:55:53.968: INFO: Got endpoints: latency-svc-ksngf [1.441213204s] Jan 7 13:55:54.038: INFO: Created: latency-svc-w9kb9 Jan 7 13:55:54.130: INFO: Got endpoints: latency-svc-w9kb9 [1.488605851s] Jan 7 13:55:54.155: INFO: Created: latency-svc-k9hwl Jan 7 13:55:54.158: INFO: Got endpoints: latency-svc-k9hwl [1.455656577s] Jan 7 13:55:54.227: INFO: Created: latency-svc-qxcgd Jan 7 13:55:54.330: INFO: Got endpoints: latency-svc-qxcgd [1.499947551s] Jan 7 13:55:54.346: INFO: Created: latency-svc-4r5ls Jan 7 13:55:54.347: INFO: Got endpoints: latency-svc-4r5ls [1.43368799s] Jan 7 13:55:54.398: INFO: Created: latency-svc-vcmf9 Jan 7 13:55:54.412: INFO: Got endpoints: latency-svc-vcmf9 [1.464536582s] Jan 7 13:55:54.577: INFO: Created: latency-svc-nvcnl Jan 7 13:55:54.597: INFO: Got endpoints: latency-svc-nvcnl [1.604955684s] Jan 7 13:55:54.643: INFO: Created: latency-svc-vn44n Jan 7 13:55:54.809: INFO: Created: latency-svc-mtpnt Jan 7 13:55:54.813: INFO: Got endpoints: latency-svc-vn44n [1.666648376s] Jan 7 13:55:54.829: INFO: Got endpoints: latency-svc-mtpnt [1.616095365s] Jan 7 13:55:54.880: INFO: Created: latency-svc-59mkz Jan 7 13:55:54.884: INFO: Got endpoints: latency-svc-59mkz [1.562545966s] Jan 7 13:55:54.998: INFO: Created: latency-svc-nd8c9 Jan 7 13:55:55.006: INFO: Got endpoints: latency-svc-nd8c9 [1.603622951s] Jan 7 13:55:55.197: INFO: Created: latency-svc-wfktq Jan 7 13:55:55.250: INFO: Created: latency-svc-d5fv8 Jan 7 13:55:55.262: INFO: Got endpoints: latency-svc-d5fv8 [1.63849093s] Jan 7 13:55:55.262: INFO: Got endpoints: 
latency-svc-wfktq [1.705096062s] Jan 7 13:55:55.379: INFO: Created: latency-svc-zgtms Jan 7 13:55:55.388: INFO: Got endpoints: latency-svc-zgtms [1.599280563s] Jan 7 13:55:55.445: INFO: Created: latency-svc-8vz6v Jan 7 13:55:55.446: INFO: Got endpoints: latency-svc-8vz6v [1.497270986s] Jan 7 13:55:55.573: INFO: Created: latency-svc-866cs Jan 7 13:55:55.579: INFO: Got endpoints: latency-svc-866cs [1.611234539s] Jan 7 13:55:55.646: INFO: Created: latency-svc-lzdwb Jan 7 13:55:55.741: INFO: Got endpoints: latency-svc-lzdwb [1.609631798s] Jan 7 13:55:55.750: INFO: Created: latency-svc-dpj4z Jan 7 13:55:55.755: INFO: Got endpoints: latency-svc-dpj4z [1.596852139s] Jan 7 13:55:55.794: INFO: Created: latency-svc-j9w74 Jan 7 13:55:55.820: INFO: Got endpoints: latency-svc-j9w74 [1.489689536s] Jan 7 13:55:55.933: INFO: Created: latency-svc-gpmpb Jan 7 13:55:55.946: INFO: Got endpoints: latency-svc-gpmpb [1.599251861s] Jan 7 13:55:56.035: INFO: Created: latency-svc-vp2fx Jan 7 13:55:56.189: INFO: Got endpoints: latency-svc-vp2fx [1.776084009s] Jan 7 13:55:56.207: INFO: Created: latency-svc-6z6nt Jan 7 13:55:56.220: INFO: Got endpoints: latency-svc-6z6nt [1.623065564s] Jan 7 13:55:56.268: INFO: Created: latency-svc-t4wkj Jan 7 13:55:56.279: INFO: Got endpoints: latency-svc-t4wkj [1.466091279s] Jan 7 13:55:56.395: INFO: Created: latency-svc-r45g4 Jan 7 13:55:56.403: INFO: Got endpoints: latency-svc-r45g4 [1.573572596s] Jan 7 13:55:56.467: INFO: Created: latency-svc-ngw7f Jan 7 13:55:56.712: INFO: Got endpoints: latency-svc-ngw7f [1.827595467s] Jan 7 13:55:56.754: INFO: Created: latency-svc-tddlf Jan 7 13:55:56.809: INFO: Got endpoints: latency-svc-tddlf [1.801889237s] Jan 7 13:55:56.940: INFO: Created: latency-svc-dx2d4 Jan 7 13:55:56.954: INFO: Got endpoints: latency-svc-dx2d4 [1.692404716s] Jan 7 13:55:57.019: INFO: Created: latency-svc-qtzss Jan 7 13:55:57.030: INFO: Got endpoints: latency-svc-qtzss [1.768175226s] Jan 7 13:55:57.228: INFO: Created: latency-svc-tt8mj Jan 7 13:55:57.250: INFO: Got endpoints: latency-svc-tt8mj [1.861949542s] Jan 7 13:55:57.308: INFO: Created: latency-svc-xq6sl Jan 7 13:55:57.310: INFO: Got endpoints: latency-svc-xq6sl [1.86411703s] Jan 7 13:55:57.475: INFO: Created: latency-svc-jxzzk Jan 7 13:55:57.481: INFO: Got endpoints: latency-svc-jxzzk [1.901799843s] Jan 7 13:55:58.527: INFO: Created: latency-svc-4v7gb Jan 7 13:55:58.556: INFO: Got endpoints: latency-svc-4v7gb [2.814734171s] Jan 7 13:55:58.686: INFO: Created: latency-svc-r9z2z Jan 7 13:55:58.692: INFO: Got endpoints: latency-svc-r9z2z [2.937041394s] Jan 7 13:55:58.789: INFO: Created: latency-svc-b7wzn Jan 7 13:55:58.907: INFO: Got endpoints: latency-svc-b7wzn [3.085923547s] Jan 7 13:55:58.937: INFO: Created: latency-svc-h7xgw Jan 7 13:55:58.941: INFO: Got endpoints: latency-svc-h7xgw [2.99490685s] Jan 7 13:55:58.984: INFO: Created: latency-svc-gg4xn Jan 7 13:55:59.128: INFO: Got endpoints: latency-svc-gg4xn [2.938398284s] Jan 7 13:55:59.130: INFO: Created: latency-svc-m67kv Jan 7 13:55:59.154: INFO: Got endpoints: latency-svc-m67kv [2.933443858s] Jan 7 13:55:59.201: INFO: Created: latency-svc-fk4xv Jan 7 13:55:59.225: INFO: Got endpoints: latency-svc-fk4xv [2.94580073s] Jan 7 13:55:59.298: INFO: Created: latency-svc-25p7c Jan 7 13:55:59.318: INFO: Got endpoints: latency-svc-25p7c [2.914990109s] Jan 7 13:55:59.369: INFO: Created: latency-svc-49vc9 Jan 7 13:55:59.369: INFO: Got endpoints: latency-svc-49vc9 [2.656838692s] Jan 7 13:55:59.476: INFO: Created: latency-svc-7q589 Jan 7 13:55:59.480: INFO: Got endpoints: 
latency-svc-7q589 [2.67061737s] Jan 7 13:55:59.547: INFO: Created: latency-svc-4xm6f Jan 7 13:55:59.666: INFO: Got endpoints: latency-svc-4xm6f [2.711454464s] Jan 7 13:55:59.670: INFO: Created: latency-svc-m64bm Jan 7 13:55:59.678: INFO: Got endpoints: latency-svc-m64bm [2.648065158s] Jan 7 13:55:59.732: INFO: Created: latency-svc-dzwvm Jan 7 13:55:59.740: INFO: Got endpoints: latency-svc-dzwvm [2.489380119s] Jan 7 13:55:59.860: INFO: Created: latency-svc-lfj7s Jan 7 13:55:59.884: INFO: Got endpoints: latency-svc-lfj7s [2.57330095s] Jan 7 13:55:59.939: INFO: Created: latency-svc-cjj6r Jan 7 13:56:00.014: INFO: Got endpoints: latency-svc-cjj6r [2.532231871s] Jan 7 13:56:00.050: INFO: Created: latency-svc-p5tpq Jan 7 13:56:00.059: INFO: Got endpoints: latency-svc-p5tpq [1.501254089s] Jan 7 13:56:00.100: INFO: Created: latency-svc-g2gtq Jan 7 13:56:00.197: INFO: Got endpoints: latency-svc-g2gtq [1.504125886s] Jan 7 13:56:00.211: INFO: Created: latency-svc-7ttbt Jan 7 13:56:00.243: INFO: Got endpoints: latency-svc-7ttbt [1.33426176s] Jan 7 13:56:00.251: INFO: Created: latency-svc-gvkrl Jan 7 13:56:00.256: INFO: Got endpoints: latency-svc-gvkrl [1.313586107s] Jan 7 13:56:00.280: INFO: Created: latency-svc-ghntw Jan 7 13:56:00.294: INFO: Got endpoints: latency-svc-ghntw [1.165725552s] Jan 7 13:56:00.374: INFO: Created: latency-svc-vl9fl Jan 7 13:56:00.380: INFO: Got endpoints: latency-svc-vl9fl [1.226064607s] Jan 7 13:56:00.456: INFO: Created: latency-svc-kt49m Jan 7 13:56:00.529: INFO: Got endpoints: latency-svc-kt49m [1.303751468s] Jan 7 13:56:00.546: INFO: Created: latency-svc-v2t8g Jan 7 13:56:00.561: INFO: Got endpoints: latency-svc-v2t8g [1.24276325s] Jan 7 13:56:00.825: INFO: Created: latency-svc-wcbsg Jan 7 13:56:00.831: INFO: Got endpoints: latency-svc-wcbsg [1.462410324s] Jan 7 13:56:01.025: INFO: Created: latency-svc-b45hn Jan 7 13:56:01.073: INFO: Got endpoints: latency-svc-b45hn [1.592609087s] Jan 7 13:56:01.077: INFO: Created: latency-svc-88pvb Jan 7 13:56:01.083: INFO: Got endpoints: latency-svc-88pvb [1.415725742s] Jan 7 13:56:01.249: INFO: Created: latency-svc-qhtkb Jan 7 13:56:01.258: INFO: Got endpoints: latency-svc-qhtkb [1.579187012s] Jan 7 13:56:01.330: INFO: Created: latency-svc-q75fq Jan 7 13:56:01.339: INFO: Got endpoints: latency-svc-q75fq [1.598236303s] Jan 7 13:56:01.457: INFO: Created: latency-svc-zrtgs Jan 7 13:56:01.457: INFO: Got endpoints: latency-svc-zrtgs [1.572833892s] Jan 7 13:56:01.499: INFO: Created: latency-svc-p7rdc Jan 7 13:56:01.566: INFO: Got endpoints: latency-svc-p7rdc [1.551244536s] Jan 7 13:56:01.613: INFO: Created: latency-svc-dmjvn Jan 7 13:56:01.614: INFO: Got endpoints: latency-svc-dmjvn [1.554872576s] Jan 7 13:56:01.653: INFO: Created: latency-svc-h88jj Jan 7 13:56:01.742: INFO: Got endpoints: latency-svc-h88jj [1.544797319s] Jan 7 13:56:01.748: INFO: Created: latency-svc-jjs4z Jan 7 13:56:01.800: INFO: Got endpoints: latency-svc-jjs4z [1.556556619s] Jan 7 13:56:01.834: INFO: Created: latency-svc-dbslw Jan 7 13:56:01.916: INFO: Got endpoints: latency-svc-dbslw [1.660118331s] Jan 7 13:56:01.932: INFO: Created: latency-svc-9pgd4 Jan 7 13:56:01.950: INFO: Got endpoints: latency-svc-9pgd4 [1.655492891s] Jan 7 13:56:02.088: INFO: Created: latency-svc-jz2z6 Jan 7 13:56:02.095: INFO: Got endpoints: latency-svc-jz2z6 [1.714529985s] Jan 7 13:56:02.148: INFO: Created: latency-svc-m7bfx Jan 7 13:56:02.186: INFO: Got endpoints: latency-svc-m7bfx [1.655604715s] Jan 7 13:56:02.276: INFO: Created: latency-svc-vxqd5 Jan 7 13:56:02.318: INFO: Got endpoints: 
latency-svc-vxqd5 [1.755608252s] Jan 7 13:56:02.320: INFO: Created: latency-svc-sz965 Jan 7 13:56:02.336: INFO: Got endpoints: latency-svc-sz965 [1.504199631s] Jan 7 13:56:02.430: INFO: Created: latency-svc-nx2qm Jan 7 13:56:02.436: INFO: Got endpoints: latency-svc-nx2qm [1.362639534s] Jan 7 13:56:02.494: INFO: Created: latency-svc-p6vvm Jan 7 13:56:02.506: INFO: Got endpoints: latency-svc-p6vvm [1.422457842s] Jan 7 13:56:02.616: INFO: Created: latency-svc-qcv2q Jan 7 13:56:02.673: INFO: Got endpoints: latency-svc-qcv2q [1.414995009s] Jan 7 13:56:02.690: INFO: Created: latency-svc-q26b5 Jan 7 13:56:02.782: INFO: Got endpoints: latency-svc-q26b5 [1.442634142s] Jan 7 13:56:02.820: INFO: Created: latency-svc-krz5p Jan 7 13:56:02.872: INFO: Got endpoints: latency-svc-krz5p [1.414903384s] Jan 7 13:56:02.897: INFO: Created: latency-svc-qml6d Jan 7 13:56:02.988: INFO: Got endpoints: latency-svc-qml6d [1.421577101s] Jan 7 13:56:02.995: INFO: Created: latency-svc-wc972 Jan 7 13:56:03.041: INFO: Got endpoints: latency-svc-wc972 [1.426890982s] Jan 7 13:56:03.179: INFO: Created: latency-svc-2nbdh Jan 7 13:56:03.183: INFO: Got endpoints: latency-svc-2nbdh [1.440663234s] Jan 7 13:56:03.222: INFO: Created: latency-svc-9bq66 Jan 7 13:56:03.225: INFO: Got endpoints: latency-svc-9bq66 [1.424524658s] Jan 7 13:56:03.273: INFO: Created: latency-svc-td2vb Jan 7 13:56:03.337: INFO: Got endpoints: latency-svc-td2vb [1.420339113s] Jan 7 13:56:03.389: INFO: Created: latency-svc-fz7p4 Jan 7 13:56:03.397: INFO: Got endpoints: latency-svc-fz7p4 [1.446357767s] Jan 7 13:56:03.429: INFO: Created: latency-svc-dkshs Jan 7 13:56:03.484: INFO: Got endpoints: latency-svc-dkshs [1.387945287s] Jan 7 13:56:03.523: INFO: Created: latency-svc-pmtd4 Jan 7 13:56:03.534: INFO: Got endpoints: latency-svc-pmtd4 [1.348229914s] Jan 7 13:56:03.577: INFO: Created: latency-svc-gl9dg Jan 7 13:56:03.583: INFO: Got endpoints: latency-svc-gl9dg [1.264733344s] Jan 7 13:56:03.690: INFO: Created: latency-svc-xws7b Jan 7 13:56:03.696: INFO: Got endpoints: latency-svc-xws7b [1.359729186s] Jan 7 13:56:03.769: INFO: Created: latency-svc-8t5cb Jan 7 13:56:03.901: INFO: Got endpoints: latency-svc-8t5cb [1.464300011s] Jan 7 13:56:03.943: INFO: Created: latency-svc-k8s2f Jan 7 13:56:03.970: INFO: Created: latency-svc-xh64n Jan 7 13:56:03.971: INFO: Got endpoints: latency-svc-k8s2f [1.464184343s] Jan 7 13:56:03.977: INFO: Got endpoints: latency-svc-xh64n [1.30266943s] Jan 7 13:56:04.110: INFO: Created: latency-svc-mddkd Jan 7 13:56:04.120: INFO: Got endpoints: latency-svc-mddkd [1.336708153s] Jan 7 13:56:04.160: INFO: Created: latency-svc-8p5n4 Jan 7 13:56:04.197: INFO: Got endpoints: latency-svc-8p5n4 [1.323572816s] Jan 7 13:56:04.299: INFO: Created: latency-svc-6ntlv Jan 7 13:56:04.325: INFO: Got endpoints: latency-svc-6ntlv [1.336108527s] Jan 7 13:56:04.332: INFO: Created: latency-svc-sdj2n Jan 7 13:56:04.341: INFO: Got endpoints: latency-svc-sdj2n [1.300201279s] Jan 7 13:56:04.382: INFO: Created: latency-svc-n62hq Jan 7 13:56:04.435: INFO: Got endpoints: latency-svc-n62hq [1.251605787s] Jan 7 13:56:04.476: INFO: Created: latency-svc-f6g87 Jan 7 13:56:04.484: INFO: Got endpoints: latency-svc-f6g87 [1.258935504s] Jan 7 13:56:04.526: INFO: Created: latency-svc-jzgth Jan 7 13:56:04.611: INFO: Got endpoints: latency-svc-jzgth [1.272946371s] Jan 7 13:56:04.615: INFO: Created: latency-svc-6xkb9 Jan 7 13:56:04.633: INFO: Got endpoints: latency-svc-6xkb9 [1.236286911s] Jan 7 13:56:04.670: INFO: Created: latency-svc-9bzpf Jan 7 13:56:04.677: INFO: Got endpoints: 
latency-svc-9bzpf [1.193419792s] Jan 7 13:56:04.763: INFO: Created: latency-svc-zv24h Jan 7 13:56:04.780: INFO: Got endpoints: latency-svc-zv24h [1.244897151s] Jan 7 13:56:04.830: INFO: Created: latency-svc-v4289 Jan 7 13:56:04.840: INFO: Got endpoints: latency-svc-v4289 [1.256557634s] Jan 7 13:56:04.975: INFO: Created: latency-svc-sg6kt Jan 7 13:56:04.979: INFO: Got endpoints: latency-svc-sg6kt [1.282967673s] Jan 7 13:56:05.161: INFO: Created: latency-svc-hp6dg Jan 7 13:56:05.184: INFO: Got endpoints: latency-svc-hp6dg [1.28207605s] Jan 7 13:56:05.235: INFO: Created: latency-svc-k56bc Jan 7 13:56:05.242: INFO: Got endpoints: latency-svc-k56bc [1.271200367s] Jan 7 13:56:05.329: INFO: Created: latency-svc-b6j6l Jan 7 13:56:05.357: INFO: Got endpoints: latency-svc-b6j6l [1.37992107s] Jan 7 13:56:05.411: INFO: Created: latency-svc-2qkqq Jan 7 13:56:05.478: INFO: Got endpoints: latency-svc-2qkqq [1.357620492s] Jan 7 13:56:05.480: INFO: Created: latency-svc-cl9n2 Jan 7 13:56:05.485: INFO: Got endpoints: latency-svc-cl9n2 [1.287206205s] Jan 7 13:56:05.485: INFO: Latencies: [222.391289ms 272.887306ms 325.100065ms 360.446077ms 488.465085ms 521.821429ms 576.884012ms 699.392174ms 833.63639ms 863.381567ms 925.01985ms 1.024938003s 1.08807519s 1.165725552s 1.193419792s 1.220734526s 1.226064607s 1.236286911s 1.241801637s 1.24276325s 1.244897151s 1.24721035s 1.251605787s 1.256557634s 1.258935504s 1.264733344s 1.271200367s 1.272946371s 1.28207605s 1.282967673s 1.287206205s 1.291445812s 1.295142545s 1.300201279s 1.30266943s 1.303751468s 1.305295542s 1.313586107s 1.323572816s 1.33426176s 1.336108527s 1.336708153s 1.338093739s 1.348229914s 1.357620492s 1.359729186s 1.362639534s 1.369105417s 1.376498537s 1.37992107s 1.382575688s 1.386104055s 1.387945287s 1.391141132s 1.39621022s 1.397002041s 1.399112213s 1.403420815s 1.408888819s 1.41174992s 1.414903384s 1.414995009s 1.415725742s 1.418654148s 1.420339113s 1.421577101s 1.422457842s 1.424524658s 1.426890982s 1.427648273s 1.43368799s 1.433737574s 1.434983232s 1.440059791s 1.440663234s 1.441072369s 1.441213204s 1.442634142s 1.44492682s 1.446357767s 1.446389792s 1.450756258s 1.455656577s 1.462410324s 1.464184343s 1.464300011s 1.464536582s 1.466091279s 1.468715558s 1.475978004s 1.481266975s 1.488605851s 1.489689536s 1.495494735s 1.497270986s 1.499947551s 1.501254089s 1.504125886s 1.504199631s 1.506490018s 1.51188114s 1.511934129s 1.520023948s 1.544797319s 1.551244536s 1.554872576s 1.556556619s 1.562545966s 1.56906171s 1.572833892s 1.573572596s 1.578081399s 1.579187012s 1.580512869s 1.581173265s 1.592609087s 1.596852139s 1.597683065s 1.598236303s 1.599251861s 1.599280563s 1.603622951s 1.604955684s 1.609631798s 1.611234539s 1.616095365s 1.623065564s 1.623692995s 1.631936696s 1.636665666s 1.63849093s 1.655492891s 1.655604715s 1.660118331s 1.666648376s 1.669732916s 1.679314461s 1.679815017s 1.685689151s 1.689998032s 1.692404716s 1.694021897s 1.705096062s 1.707028379s 1.712888306s 1.714529985s 1.722788383s 1.726751282s 1.737449774s 1.739048318s 1.740527166s 1.742401968s 1.744192866s 1.744652632s 1.748479687s 1.749703945s 1.753390561s 1.755608252s 1.758958735s 1.768175226s 1.772666234s 1.776084009s 1.780731753s 1.783422561s 1.796485846s 1.797361831s 1.801889237s 1.803768983s 1.809528018s 1.827595467s 1.828015706s 1.837164236s 1.83717284s 1.839772043s 1.848317886s 1.858670227s 1.86151256s 1.861949542s 1.86411703s 1.871019599s 1.881833648s 1.884721169s 1.895894904s 1.901799843s 1.921535725s 2.489380119s 2.532231871s 2.57330095s 2.648065158s 2.656838692s 2.67061737s 
2.711454464s 2.814734171s 2.914990109s 2.933443858s 2.937041394s 2.938398284s 2.94580073s 2.99490685s 3.085923547s] Jan 7 13:56:05.485: INFO: 50 %ile: 1.51188114s Jan 7 13:56:05.485: INFO: 90 %ile: 1.881833648s Jan 7 13:56:05.485: INFO: 99 %ile: 2.99490685s Jan 7 13:56:05.486: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:56:05.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9744" for this suite. Jan 7 13:56:47.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:56:47.743: INFO: namespace svc-latency-9744 deletion completed in 42.246980629s • [SLOW TEST:73.304 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:56:47.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 7 13:56:47.955: INFO: Waiting up to 5m0s for pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50" in namespace "emptydir-7926" to be "success or failure" Jan 7 13:56:47.963: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120817ms Jan 7 13:56:49.978: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022369432s Jan 7 13:56:52.009: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053983499s Jan 7 13:56:54.030: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074280618s Jan 7 13:56:56.038: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082700936s Jan 7 13:56:58.048: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.092961118s STEP: Saw pod success Jan 7 13:56:58.049: INFO: Pod "pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50" satisfied condition "success or failure" Jan 7 13:56:58.055: INFO: Trying to get logs from node iruya-node pod pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50 container test-container: STEP: delete the pod Jan 7 13:56:58.174: INFO: Waiting for pod pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50 to disappear Jan 7 13:56:58.203: INFO: Pod pod-89d114b9-9f0c-4de0-8ddf-1b569bd04a50 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 13:56:58.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7926" for this suite. Jan 7 13:57:04.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 13:57:04.401: INFO: namespace emptydir-7926 deletion completed in 6.186175723s • [SLOW TEST:16.656 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 13:57:04.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 13:57:04.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5" in namespace "projected-8807" to be "success or failure" Jan 7 13:57:04.539: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168008ms Jan 7 13:57:06.563: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029661589s Jan 7 13:57:08.575: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0420662s Jan 7 13:57:10.632: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099068426s Jan 7 13:57:12.638: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105247131s Jan 7 13:57:14.681: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:57:04.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 7 13:57:04.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5" in namespace "projected-8807" to be "success or failure"
Jan 7 13:57:04.539: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168008ms
Jan 7 13:57:06.563: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029661589s
Jan 7 13:57:08.575: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0420662s
Jan 7 13:57:10.632: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099068426s
Jan 7 13:57:12.638: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105247131s
Jan 7 13:57:14.681: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147624921s
STEP: Saw pod success
Jan 7 13:57:14.681: INFO: Pod "downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5" satisfied condition "success or failure"
Jan 7 13:57:14.689: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5 container client-container: 
STEP: delete the pod
Jan 7 13:57:14.844: INFO: Waiting for pod downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5 to disappear
Jan 7 13:57:14.850: INFO: Pod downwardapi-volume-8e2f00c8-861c-4142-bd26-57918e96c4b5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:57:14.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8807" for this suite.
Jan 7 13:57:20.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:57:21.036: INFO: namespace projected-8807 deletion completed in 6.178920387s
• [SLOW TEST:16.634 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
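The projected downwardAPI test above works by mounting the container's own memory request as a file and asserting on the file's contents. A sketch of the volume and container wiring involved (names, image, and the 32Mi value are illustrative assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The container must declare a memory request so the projected
	// downward API file has a concrete value to expose.
	container := corev1.Container{
		Name:  "client-container",
		Image: "busybox", // assumed image
		// Reading the projected file yields the request, which the test
		// then compares against the declared value.
		Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	volume := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(container.Name, "mounts projected volume", volume.Name)
}
```
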
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:57:21.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 7 13:57:21.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f" in namespace "downward-api-6570" to be "success or failure"
Jan 7 13:57:21.245: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.690999ms
Jan 7 13:57:23.260: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022296306s
Jan 7 13:57:25.274: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03623232s
Jan 7 13:57:27.287: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049823285s
Jan 7 13:57:29.303: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06557686s
STEP: Saw pod success
Jan 7 13:57:29.304: INFO: Pod "downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f" satisfied condition "success or failure"
Jan 7 13:57:29.315: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f container client-container: 
STEP: delete the pod
Jan 7 13:57:29.426: INFO: Waiting for pod downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f to disappear
Jan 7 13:57:29.437: INFO: Pod downwardapi-volume-09080ccc-9b44-4fc0-9b8a-4b67edb6622f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:57:29.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6570" for this suite.
Jan 7 13:57:35.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:57:35.607: INFO: namespace downward-api-6570 deletion completed in 6.163161055s
• [SLOW TEST:14.571 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
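The defaulting behavior this test relies on: a downward API resourceFieldRef for limits.cpu on a container that sets no CPU limit resolves to the node's allocatable CPU instead of failing. A sketch of that wiring (names and image are illustrative, not the suite's code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Note: no Resources.Limits on the container. The projected
	// limits.cpu file is then filled from node allocatable, which is
	// exactly what the test asserts on.
	container := corev1.Container{
		Name:         "client-container",
		Image:        "busybox", // assumed image
		Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	volume := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	fmt.Println(container.Name, "reads", volume.Name)
}
```
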
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:57:35.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-680a4aa4-5ff1-4149-8257-b34b07cc137a
STEP: Creating a pod to test consume secrets
Jan 7 13:57:35.764: INFO: Waiting up to 5m0s for pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60" in namespace "secrets-9456" to be "success or failure"
Jan 7 13:57:35.843: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Pending", Reason="", readiness=false. Elapsed: 79.149689ms
Jan 7 13:57:37.865: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101302758s
Jan 7 13:57:39.875: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110531832s
Jan 7 13:57:41.895: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131160814s
Jan 7 13:57:43.902: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138426815s
Jan 7 13:57:45.912: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148058235s
STEP: Saw pod success
Jan 7 13:57:45.912: INFO: Pod "pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60" satisfied condition "success or failure"
Jan 7 13:57:45.918: INFO: Trying to get logs from node iruya-node pod pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60 container secret-volume-test: 
STEP: delete the pod
Jan 7 13:57:46.015: INFO: Waiting for pod pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60 to disappear
Jan 7 13:57:46.063: INFO: Pod pod-secrets-01376fa9-16b0-4138-abff-1648b76f1d60 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 7 13:57:46.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9456" for this suite.
Jan 7 13:57:52.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 13:57:52.262: INFO: namespace secrets-9456 deletion completed in 6.183375903s
• [SLOW TEST:16.655 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
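"Mappings and Item Mode set" refers to the Items list of a secret volume: each key is remapped to a chosen path and given an explicit file mode. A sketch of that volume source, reusing the secret name from the log (the key, path, and mode value are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed per-item mode; the test name only says one is set
	volume := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				// Secret name taken from the log above.
				SecretName: "secret-test-map-680a4aa4-5ff1-4149-8257-b34b07cc137a",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // assumed key
					Path: "new-path-data-1", // the "mapping": the key lands at this path
					Mode: &mode,
				}},
			},
		},
	}
	fmt.Printf("item %q -> %q, mode %o\n",
		volume.Secret.Items[0].Key, volume.Secret.Items[0].Path, *volume.Secret.Items[0].Mode)
}
```
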
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 13:57:52.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6059
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6059
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6059
Jan 7 13:57:52.418: INFO: Found 0 stateful pods, waiting for 1
Jan 7 13:58:02.428: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 7 13:58:02.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 7 13:58:03.245: INFO: stderr: "I0107 13:58:02.841980 1649 log.go:172] (0xc000a66420) (0xc000704820) Create stream\nI0107 13:58:02.842147 1649 log.go:172] (0xc000a66420) (0xc000704820) Stream added, broadcasting: 1\nI0107 13:58:02.860894 1649 log.go:172] (0xc000a66420) Reply frame received for 1\nI0107 13:58:02.861269 1649 log.go:172] (0xc000a66420) (0xc000680280) Create stream\nI0107 13:58:02.861342 1649 log.go:172] (0xc000a66420) (0xc000680280) Stream added, broadcasting: 3\nI0107 13:58:02.869074 1649 log.go:172] (0xc000a66420) Reply frame received for 3\nI0107 13:58:02.869133 1649 log.go:172] (0xc000a66420) (0xc000233ae0) Create stream\nI0107 13:58:02.869149 1649 log.go:172] (0xc000a66420) (0xc000233ae0) Stream added, broadcasting: 5\nI0107 13:58:02.871123 1649 log.go:172] (0xc000a66420) Reply frame received for 5\nI0107 13:58:03.018958 1649 log.go:172] (0xc000a66420) Data frame received for 5\nI0107 13:58:03.019040 1649 log.go:172] (0xc000233ae0) (5) Data frame handling\nI0107 13:58:03.019085 1649 log.go:172] (0xc000233ae0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:58:03.074852 1649 log.go:172] (0xc000a66420) Data frame received for 3\nI0107 13:58:03.074971 1649 log.go:172] (0xc000680280) (3) Data frame handling\nI0107 13:58:03.075021 1649 log.go:172] (0xc000680280) (3) Data frame sent\nI0107 13:58:03.227669 1649 log.go:172] (0xc000a66420) Data frame received for 1\nI0107 13:58:03.227827 1649 log.go:172] (0xc000a66420) (0xc000680280) Stream removed, broadcasting: 3\nI0107 13:58:03.227926 1649 log.go:172] (0xc000704820) (1) Data frame handling\nI0107 13:58:03.227975 1649 log.go:172] (0xc000704820) (1) Data frame sent\nI0107 13:58:03.228138 1649 log.go:172] (0xc000a66420) (0xc000233ae0) Stream removed, broadcasting: 5\nI0107 13:58:03.228473 1649 log.go:172] (0xc000a66420) (0xc000704820) Stream removed, broadcasting: 1\nI0107 13:58:03.228571 1649 log.go:172] (0xc000a66420) Go away received\nI0107 13:58:03.230489 1649 log.go:172] (0xc000a66420) (0xc000704820) Stream removed, broadcasting: 1\nI0107 13:58:03.230540 1649 log.go:172] (0xc000a66420) (0xc000680280) Stream removed, broadcasting: 3\nI0107 13:58:03.230611 1649 log.go:172] (0xc000a66420) (0xc000233ae0) Stream removed, broadcasting: 5\n"
Jan 7 13:58:03.245: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 7 13:58:03.245: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 7 13:58:03.251: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 7 13:58:13.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 7 13:58:13.262: INFO: Waiting for statefulset status.replicas updated to 0
Jan 7 13:58:13.366: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 7 13:58:13.366: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }]
Jan 7 13:58:13.367: INFO: 
Jan 7 13:58:13.367: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 7 13:58:14.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.930240958s
Jan 7 13:58:15.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.803294676s
Jan 7 13:58:16.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.490807424s
Jan 7 13:58:17.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.481257468s
Jan 7 13:58:19.053: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 4.46332357s Jan 7 13:58:20.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.243852384s Jan 7 13:58:21.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.655649987s Jan 7 13:58:22.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 645.040479ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6059 Jan 7 13:58:23.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:24.493: INFO: stderr: "I0107 13:58:24.007569 1672 log.go:172] (0xc000a26420) (0xc0008eb220) Create stream\nI0107 13:58:24.007848 1672 log.go:172] (0xc000a26420) (0xc0008eb220) Stream added, broadcasting: 1\nI0107 13:58:24.030668 1672 log.go:172] (0xc000a26420) Reply frame received for 1\nI0107 13:58:24.030948 1672 log.go:172] (0xc000a26420) (0xc0008ea000) Create stream\nI0107 13:58:24.030982 1672 log.go:172] (0xc000a26420) (0xc0008ea000) Stream added, broadcasting: 3\nI0107 13:58:24.041947 1672 log.go:172] (0xc000a26420) Reply frame received for 3\nI0107 13:58:24.042275 1672 log.go:172] (0xc000a26420) (0xc0008f8000) Create stream\nI0107 13:58:24.042321 1672 log.go:172] (0xc000a26420) (0xc0008f8000) Stream added, broadcasting: 5\nI0107 13:58:24.046244 1672 log.go:172] (0xc000a26420) Reply frame received for 5\nI0107 13:58:24.261558 1672 log.go:172] (0xc000a26420) Data frame received for 5\nI0107 13:58:24.261731 1672 log.go:172] (0xc0008f8000) (5) Data frame handling\nI0107 13:58:24.261785 1672 log.go:172] (0xc0008f8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 13:58:24.261830 1672 log.go:172] (0xc000a26420) Data frame received for 3\nI0107 13:58:24.261863 1672 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0107 13:58:24.261887 1672 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0107 13:58:24.477064 1672 log.go:172] (0xc000a26420) (0xc0008ea000) Stream removed, broadcasting: 3\nI0107 13:58:24.477337 1672 log.go:172] (0xc000a26420) Data frame received for 1\nI0107 13:58:24.477359 1672 log.go:172] (0xc0008eb220) (1) Data frame handling\nI0107 13:58:24.477387 1672 log.go:172] (0xc0008eb220) (1) Data frame sent\nI0107 13:58:24.477524 1672 log.go:172] (0xc000a26420) (0xc0008eb220) Stream removed, broadcasting: 1\nI0107 13:58:24.478578 1672 log.go:172] (0xc000a26420) (0xc0008f8000) Stream removed, broadcasting: 5\nI0107 13:58:24.478644 1672 log.go:172] (0xc000a26420) (0xc0008eb220) Stream removed, broadcasting: 1\nI0107 13:58:24.478716 1672 log.go:172] (0xc000a26420) (0xc0008ea000) Stream removed, broadcasting: 3\nI0107 13:58:24.478736 1672 log.go:172] (0xc000a26420) (0xc0008f8000) Stream removed, broadcasting: 5\nI0107 13:58:24.478780 1672 log.go:172] (0xc000a26420) Go away received\n" Jan 7 13:58:24.493: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:58:24.493: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:58:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:24.842: INFO: stderr: "I0107 13:58:24.673947 1691 log.go:172] (0xc000996420) (0xc0001fc820) Create stream\nI0107 13:58:24.674100 1691 
log.go:172] (0xc000996420) (0xc0001fc820) Stream added, broadcasting: 1\nI0107 13:58:24.677221 1691 log.go:172] (0xc000996420) Reply frame received for 1\nI0107 13:58:24.677256 1691 log.go:172] (0xc000996420) (0xc0009bc000) Create stream\nI0107 13:58:24.677267 1691 log.go:172] (0xc000996420) (0xc0009bc000) Stream added, broadcasting: 3\nI0107 13:58:24.678231 1691 log.go:172] (0xc000996420) Reply frame received for 3\nI0107 13:58:24.678264 1691 log.go:172] (0xc000996420) (0xc0006301e0) Create stream\nI0107 13:58:24.678278 1691 log.go:172] (0xc000996420) (0xc0006301e0) Stream added, broadcasting: 5\nI0107 13:58:24.679284 1691 log.go:172] (0xc000996420) Reply frame received for 5\nI0107 13:58:24.763668 1691 log.go:172] (0xc000996420) Data frame received for 3\nI0107 13:58:24.763765 1691 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0107 13:58:24.763793 1691 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0107 13:58:24.763826 1691 log.go:172] (0xc000996420) Data frame received for 5\nI0107 13:58:24.763846 1691 log.go:172] (0xc0006301e0) (5) Data frame handling\nI0107 13:58:24.763862 1691 log.go:172] (0xc0006301e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0107 13:58:24.836278 1691 log.go:172] (0xc000996420) Data frame received for 1\nI0107 13:58:24.836370 1691 log.go:172] (0xc000996420) (0xc0006301e0) Stream removed, broadcasting: 5\nI0107 13:58:24.836413 1691 log.go:172] (0xc0001fc820) (1) Data frame handling\nI0107 13:58:24.836428 1691 log.go:172] (0xc0001fc820) (1) Data frame sent\nI0107 13:58:24.836635 1691 log.go:172] (0xc000996420) (0xc0009bc000) Stream removed, broadcasting: 3\nI0107 13:58:24.836661 1691 log.go:172] (0xc000996420) (0xc0001fc820) Stream removed, broadcasting: 1\nI0107 13:58:24.836677 1691 log.go:172] (0xc000996420) Go away received\nI0107 13:58:24.837496 1691 log.go:172] (0xc000996420) (0xc0001fc820) Stream removed, broadcasting: 1\nI0107 13:58:24.837506 1691 log.go:172] (0xc000996420) (0xc0009bc000) Stream removed, broadcasting: 3\nI0107 13:58:24.837510 1691 log.go:172] (0xc000996420) (0xc0006301e0) Stream removed, broadcasting: 5\n" Jan 7 13:58:24.842: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:58:24.842: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:58:24.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:25.269: INFO: stderr: "I0107 13:58:25.025493 1710 log.go:172] (0xc0001166e0) (0xc0007b2640) Create stream\nI0107 13:58:25.025624 1710 log.go:172] (0xc0001166e0) (0xc0007b2640) Stream added, broadcasting: 1\nI0107 13:58:25.031038 1710 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0107 13:58:25.031119 1710 log.go:172] (0xc0001166e0) (0xc00088e000) Create stream\nI0107 13:58:25.031133 1710 log.go:172] (0xc0001166e0) (0xc00088e000) Stream added, broadcasting: 3\nI0107 13:58:25.032501 1710 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0107 13:58:25.032524 1710 log.go:172] (0xc0001166e0) (0xc0007b26e0) Create stream\nI0107 13:58:25.032532 1710 log.go:172] (0xc0001166e0) (0xc0007b26e0) Stream added, broadcasting: 5\nI0107 13:58:25.033893 1710 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0107 13:58:25.116273 1710 log.go:172] (0xc0001166e0) Data frame received 
for 5\nI0107 13:58:25.116365 1710 log.go:172] (0xc0007b26e0) (5) Data frame handling\nI0107 13:58:25.116384 1710 log.go:172] (0xc0007b26e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0107 13:58:25.116448 1710 log.go:172] (0xc0001166e0) Data frame received for 3\nI0107 13:58:25.116461 1710 log.go:172] (0xc00088e000) (3) Data frame handling\nI0107 13:58:25.116471 1710 log.go:172] (0xc00088e000) (3) Data frame sent\nI0107 13:58:25.117832 1710 log.go:172] (0xc0001166e0) Data frame received for 5\nI0107 13:58:25.117842 1710 log.go:172] (0xc0007b26e0) (5) Data frame handling\nI0107 13:58:25.117850 1710 log.go:172] (0xc0007b26e0) (5) Data frame sent\n+ true\nI0107 13:58:25.255022 1710 log.go:172] (0xc0001166e0) Data frame received for 1\nI0107 13:58:25.255419 1710 log.go:172] (0xc0001166e0) (0xc00088e000) Stream removed, broadcasting: 3\nI0107 13:58:25.255484 1710 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0107 13:58:25.255511 1710 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0107 13:58:25.255566 1710 log.go:172] (0xc0001166e0) (0xc0007b26e0) Stream removed, broadcasting: 5\nI0107 13:58:25.255643 1710 log.go:172] (0xc0001166e0) (0xc0007b2640) Stream removed, broadcasting: 1\nI0107 13:58:25.255678 1710 log.go:172] (0xc0001166e0) Go away received\nI0107 13:58:25.257516 1710 log.go:172] (0xc0001166e0) (0xc0007b2640) Stream removed, broadcasting: 1\nI0107 13:58:25.257540 1710 log.go:172] (0xc0001166e0) (0xc00088e000) Stream removed, broadcasting: 3\nI0107 13:58:25.257556 1710 log.go:172] (0xc0001166e0) (0xc0007b26e0) Stream removed, broadcasting: 5\n" Jan 7 13:58:25.269: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 13:58:25.269: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 13:58:25.292: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:58:25.292: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 13:58:25.292: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 7 13:58:25.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:58:25.816: INFO: stderr: "I0107 13:58:25.496370 1729 log.go:172] (0xc00065e420) (0xc0005b2820) Create stream\nI0107 13:58:25.496502 1729 log.go:172] (0xc00065e420) (0xc0005b2820) Stream added, broadcasting: 1\nI0107 13:58:25.502078 1729 log.go:172] (0xc00065e420) Reply frame received for 1\nI0107 13:58:25.502129 1729 log.go:172] (0xc00065e420) (0xc0006761e0) Create stream\nI0107 13:58:25.502145 1729 log.go:172] (0xc00065e420) (0xc0006761e0) Stream added, broadcasting: 3\nI0107 13:58:25.503442 1729 log.go:172] (0xc00065e420) Reply frame received for 3\nI0107 13:58:25.503473 1729 log.go:172] (0xc00065e420) (0xc0005b28c0) Create stream\nI0107 13:58:25.503482 1729 log.go:172] (0xc00065e420) (0xc0005b28c0) Stream added, broadcasting: 5\nI0107 13:58:25.505463 1729 log.go:172] (0xc00065e420) Reply frame received for 5\nI0107 13:58:25.629635 1729 log.go:172] (0xc00065e420) Data frame received for 3\nI0107 13:58:25.629728 1729 log.go:172] (0xc0006761e0) (3) Data frame handling\nI0107 13:58:25.629768 1729 log.go:172] (0xc0006761e0) 
(3) Data frame sent\nI0107 13:58:25.631302 1729 log.go:172] (0xc00065e420) Data frame received for 5\nI0107 13:58:25.631372 1729 log.go:172] (0xc0005b28c0) (5) Data frame handling\nI0107 13:58:25.631404 1729 log.go:172] (0xc0005b28c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:58:25.803916 1729 log.go:172] (0xc00065e420) Data frame received for 1\nI0107 13:58:25.803983 1729 log.go:172] (0xc00065e420) (0xc0005b28c0) Stream removed, broadcasting: 5\nI0107 13:58:25.804320 1729 log.go:172] (0xc0005b2820) (1) Data frame handling\nI0107 13:58:25.804423 1729 log.go:172] (0xc0005b2820) (1) Data frame sent\nI0107 13:58:25.804527 1729 log.go:172] (0xc00065e420) (0xc0005b2820) Stream removed, broadcasting: 1\nI0107 13:58:25.805020 1729 log.go:172] (0xc00065e420) (0xc0006761e0) Stream removed, broadcasting: 3\nI0107 13:58:25.805105 1729 log.go:172] (0xc00065e420) Go away received\nI0107 13:58:25.805953 1729 log.go:172] (0xc00065e420) (0xc0005b2820) Stream removed, broadcasting: 1\nI0107 13:58:25.805980 1729 log.go:172] (0xc00065e420) (0xc0006761e0) Stream removed, broadcasting: 3\nI0107 13:58:25.805990 1729 log.go:172] (0xc00065e420) (0xc0005b28c0) Stream removed, broadcasting: 5\n" Jan 7 13:58:25.816: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:58:25.816: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:58:25.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:58:26.166: INFO: stderr: "I0107 13:58:25.963688 1748 log.go:172] (0xc0007ae420) (0xc000564820) Create stream\nI0107 13:58:25.964475 1748 log.go:172] (0xc0007ae420) (0xc000564820) Stream added, broadcasting: 1\nI0107 13:58:25.972817 1748 log.go:172] (0xc0007ae420) Reply frame received for 1\nI0107 13:58:25.972902 1748 log.go:172] (0xc0007ae420) (0xc000826000) Create stream\nI0107 13:58:25.972946 1748 log.go:172] (0xc0007ae420) (0xc000826000) Stream added, broadcasting: 3\nI0107 13:58:25.974919 1748 log.go:172] (0xc0007ae420) Reply frame received for 3\nI0107 13:58:25.975053 1748 log.go:172] (0xc0007ae420) (0xc0005648c0) Create stream\nI0107 13:58:25.975073 1748 log.go:172] (0xc0007ae420) (0xc0005648c0) Stream added, broadcasting: 5\nI0107 13:58:25.976557 1748 log.go:172] (0xc0007ae420) Reply frame received for 5\nI0107 13:58:26.050527 1748 log.go:172] (0xc0007ae420) Data frame received for 5\nI0107 13:58:26.050680 1748 log.go:172] (0xc0005648c0) (5) Data frame handling\nI0107 13:58:26.050721 1748 log.go:172] (0xc0005648c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:58:26.078796 1748 log.go:172] (0xc0007ae420) Data frame received for 3\nI0107 13:58:26.078826 1748 log.go:172] (0xc000826000) (3) Data frame handling\nI0107 13:58:26.078837 1748 log.go:172] (0xc000826000) (3) Data frame sent\nI0107 13:58:26.156590 1748 log.go:172] (0xc0007ae420) (0xc000826000) Stream removed, broadcasting: 3\nI0107 13:58:26.157198 1748 log.go:172] (0xc0007ae420) Data frame received for 1\nI0107 13:58:26.157256 1748 log.go:172] (0xc000564820) (1) Data frame handling\nI0107 13:58:26.157313 1748 log.go:172] (0xc000564820) (1) Data frame sent\nI0107 13:58:26.157343 1748 log.go:172] (0xc0007ae420) (0xc000564820) Stream removed, broadcasting: 1\nI0107 13:58:26.157435 1748 log.go:172] (0xc0007ae420) (0xc0005648c0) Stream removed, 
broadcasting: 5\nI0107 13:58:26.157538 1748 log.go:172] (0xc0007ae420) Go away received\nI0107 13:58:26.158391 1748 log.go:172] (0xc0007ae420) (0xc000564820) Stream removed, broadcasting: 1\nI0107 13:58:26.158408 1748 log.go:172] (0xc0007ae420) (0xc000826000) Stream removed, broadcasting: 3\nI0107 13:58:26.158413 1748 log.go:172] (0xc0007ae420) (0xc0005648c0) Stream removed, broadcasting: 5\n" Jan 7 13:58:26.167: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:58:26.167: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:58:26.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 13:58:27.000: INFO: stderr: "I0107 13:58:26.327094 1767 log.go:172] (0xc000a806e0) (0xc0008da960) Create stream\nI0107 13:58:26.327213 1767 log.go:172] (0xc000a806e0) (0xc0008da960) Stream added, broadcasting: 1\nI0107 13:58:26.342937 1767 log.go:172] (0xc000a806e0) Reply frame received for 1\nI0107 13:58:26.342985 1767 log.go:172] (0xc000a806e0) (0xc0008da000) Create stream\nI0107 13:58:26.342994 1767 log.go:172] (0xc000a806e0) (0xc0008da000) Stream added, broadcasting: 3\nI0107 13:58:26.344340 1767 log.go:172] (0xc000a806e0) Reply frame received for 3\nI0107 13:58:26.344367 1767 log.go:172] (0xc000a806e0) (0xc0005ec1e0) Create stream\nI0107 13:58:26.344387 1767 log.go:172] (0xc000a806e0) (0xc0005ec1e0) Stream added, broadcasting: 5\nI0107 13:58:26.346640 1767 log.go:172] (0xc000a806e0) Reply frame received for 5\nI0107 13:58:26.495754 1767 log.go:172] (0xc000a806e0) Data frame received for 5\nI0107 13:58:26.495881 1767 log.go:172] (0xc0005ec1e0) (5) Data frame handling\nI0107 13:58:26.495944 1767 log.go:172] (0xc0005ec1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 13:58:26.569896 1767 log.go:172] (0xc000a806e0) Data frame received for 3\nI0107 13:58:26.570067 1767 log.go:172] (0xc0008da000) (3) Data frame handling\nI0107 13:58:26.570106 1767 log.go:172] (0xc0008da000) (3) Data frame sent\nI0107 13:58:26.975707 1767 log.go:172] (0xc000a806e0) Data frame received for 1\nI0107 13:58:26.976011 1767 log.go:172] (0xc000a806e0) (0xc0005ec1e0) Stream removed, broadcasting: 5\nI0107 13:58:26.976245 1767 log.go:172] (0xc0008da960) (1) Data frame handling\nI0107 13:58:26.976303 1767 log.go:172] (0xc0008da960) (1) Data frame sent\nI0107 13:58:26.976322 1767 log.go:172] (0xc000a806e0) (0xc0008da000) Stream removed, broadcasting: 3\nI0107 13:58:26.976398 1767 log.go:172] (0xc000a806e0) (0xc0008da960) Stream removed, broadcasting: 1\nI0107 13:58:26.976431 1767 log.go:172] (0xc000a806e0) Go away received\nI0107 13:58:26.977947 1767 log.go:172] (0xc000a806e0) (0xc0008da960) Stream removed, broadcasting: 1\nI0107 13:58:26.977971 1767 log.go:172] (0xc000a806e0) (0xc0008da000) Stream removed, broadcasting: 3\nI0107 13:58:26.977978 1767 log.go:172] (0xc000a806e0) (0xc0005ec1e0) Stream removed, broadcasting: 5\n" Jan 7 13:58:27.000: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 13:58:27.000: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 13:58:27.000: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 13:58:27.037: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false 
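The mv commands above are how this test toggles pod readiness: moving index.html out of the nginx web root makes the pod's readiness check fail without restarting the container (which is why the log then waits for Ready=false), and moving it back restores Ready. A sketch of issuing the same toggle from Go, with the namespace, pod name, and paths taken from the log; the retry cadence mirrors the framework's "Waiting 10s to retry failed RunHostCmd" seen later, and the retry count is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// toggleReady moves index.html out of (or back into) the nginx web root in
// the named pod, the same trick the log shows being applied to ss-0..ss-2.
func toggleReady(ns, pod string, makeReady bool) error {
	cmd := "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
	if makeReady {
		cmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
	}
	// Retry for a while: around a scale-down the container may already be
	// gone, which is the "unable to upgrade connection: container not
	// found" / "pods not found" churn visible further down in the log.
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod,
			"--", "/bin/sh", "-c", cmd).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return nil
		}
		time.Sleep(10 * time.Second) // mirrors the framework's 10s retry interval
	}
	return fmt.Errorf("giving up toggling readiness on %s/%s", ns, pod)
}

func main() {
	_ = toggleReady("statefulset-6059", "ss-0", false) // values from the log
}
```
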
Jan 7 13:58:27.038: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:58:27.038: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 7 13:58:27.061: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:27.061: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:27.061: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:27.061: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:27.061: INFO: Jan 7 13:58:27.061: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:28.468: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:28.468: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:28.468: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:28.468: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:28.469: INFO: Jan 7 13:58:28.469: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:29.480: INFO: POD NODE PHASE GRACE 
CONDITIONS Jan 7 13:58:29.480: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:29.480: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:29.480: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:29.480: INFO: Jan 7 13:58:29.480: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:30.920: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:30.920: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:30.921: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:30.921: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:30.921: INFO: Jan 7 13:58:30.921: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:31.945: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:31.946: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:31.946: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:31.946: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:31.946: INFO: Jan 7 13:58:31.946: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:32.953: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:32.954: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:32.954: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:32.954: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:32.954: INFO: Jan 7 13:58:32.954: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:33.969: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:33.969: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 
13:58:33.969: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:33.969: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:33.969: INFO: Jan 7 13:58:33.969: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:34.977: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:34.977: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:57:52 +0000 UTC }] Jan 7 13:58:34.977: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:34.977: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:34.977: INFO: Jan 7 13:58:34.977: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 13:58:35.985: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:35.986: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:35.986: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:35.986: INFO: Jan 7 13:58:35.986: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 7 13:58:36.993: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 13:58:36.993: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:36.993: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 13:58:13 +0000 UTC }] Jan 7 13:58:36.993: INFO: Jan 7 13:58:36.993: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6059 Jan 7 13:58:38.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:38.261: INFO: rc: 1 Jan 7 13:58:38.262: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0024d6840 exit status 1 true [0xc0021c0098 0xc0021c00b0 0xc0021c00c8] [0xc0021c0098 0xc0021c00b0 0xc0021c00c8] [0xc0021c00a8 0xc0021c00c0] [0xba6c50 0xba6c50] 0xc002cd95c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 7 13:58:48.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:48.422: INFO: rc: 1 Jan 7 13:58:48.423: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d6930 exit status 1 true [0xc0021c00d0 0xc0021c00e8 0xc0021c0100] [0xc0021c00d0 0xc0021c00e8 0xc0021c0100] [0xc0021c00e0 0xc0021c00f8] [0xba6c50 0xba6c50] 0xc002cd9a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:58:58.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:58:58.632: INFO: rc: 1 Jan 7 13:58:58.633: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d6a20 exit status 1 true [0xc0021c0108 0xc0021c0120 0xc0021c0138] [0xc0021c0108 0xc0021c0120 0xc0021c0138] [0xc0021c0118 0xc0021c0130] [0xba6c50 0xba6c50] 0xc002cd9f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:59:08.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:59:08.841: INFO: rc: 1 Jan 7 13:59:08.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00233c840 exit status 1 true [0xc002444000 0xc002444018 0xc002444030] [0xc002444000 0xc002444018 0xc002444030] [0xc002444010 0xc002444028] [0xba6c50 0xba6c50] 0xc002b083c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:59:18.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:59:20.966: INFO: rc: 1 Jan 7 13:59:20.966: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d6ae0 exit status 1 true [0xc0021c0148 0xc0021c0160 0xc0021c0178] [0xc0021c0148 0xc0021c0160 0xc0021c0178] [0xc0021c0158 0xc0021c0170] [0xba6c50 0xba6c50] 0xc0028362a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:59:30.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:59:31.219: INFO: rc: 1 Jan 7 13:59:31.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029aa930 exit status 1 true [0xc002c1e028 0xc002c1e040 0xc002c1e058] [0xc002c1e028 0xc002c1e040 0xc002c1e058] [0xc002c1e038 0xc002c1e050] [0xba6c50 0xba6c50] 0xc0025da960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:59:41.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:59:41.355: INFO: rc: 1 Jan 7 13:59:41.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods 
"ss-2" not found [] 0xc001546b10 exit status 1 true [0xc0000ebd90 0xc0000ebde0 0xc0000ebf68] [0xc0000ebd90 0xc0000ebde0 0xc0000ebf68] [0xc0000ebdb0 0xc0000ebf60] [0xba6c50 0xba6c50] 0xc001f5c540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 13:59:51.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 13:59:51.551: INFO: rc: 1 Jan 7 13:59:51.552: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00233c930 exit status 1 true [0xc002444038 0xc002444050 0xc002444068] [0xc002444038 0xc002444050 0xc002444068] [0xc002444048 0xc002444060] [0xba6c50 0xba6c50] 0xc002b086c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:00:01.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:01.761: INFO: rc: 1 Jan 7 14:00:01.762: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00233c9f0 exit status 1 true [0xc002444070 0xc002444088 0xc0024440a0] [0xc002444070 0xc002444088 0xc0024440a0] [0xc002444080 0xc002444098] [0xba6c50 0xba6c50] 0xc002b089c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:00:11.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:11.977: INFO: rc: 1 Jan 7 14:00:11.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d6c00 exit status 1 true [0xc0021c0180 0xc0021c0198 0xc0021c01b0] [0xc0021c0180 0xc0021c0198 0xc0021c01b0] [0xc0021c0190 0xc0021c01a8] [0xba6c50 0xba6c50] 0xc002836600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:00:21.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:22.264: INFO: rc: 1 Jan 7 14:00:22.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024d6cf0 exit status 1 true [0xc0021c01b8 0xc0021c01d0 0xc0021c01e8] [0xc0021c01b8 0xc0021c01d0 0xc0021c01e8] [0xc0021c01c8 0xc0021c01e0] [0xba6c50 0xba6c50] 0xc002836900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not 
found error: exit status 1 Jan 7 14:00:32.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:32.436: INFO: rc: 1 Jan 7 14:00:32.437: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e090 exit status 1 true [0xc002202008 0xc002202020 0xc002202038] [0xc002202008 0xc002202020 0xc002202038] [0xc002202018 0xc002202030] [0xba6c50 0xba6c50] 0xc0016d6180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:00:42.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:42.581: INFO: rc: 1 Jan 7 14:00:42.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a150 exit status 1 true [0xc00062c108 0xc002202050 0xc002202068] [0xc00062c108 0xc002202050 0xc002202068] [0xc002202048 0xc002202060] [0xba6c50 0xba6c50] 0xc0016d6b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:00:52.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:00:52.743: INFO: rc: 1 Jan 7 14:00:52.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029aa0c0 exit status 1 true [0xc002c1e000 0xc002c1e018 0xc002c1e030] [0xc002c1e000 0xc002c1e018 0xc002c1e030] [0xc002c1e010 0xc002c1e028] [0xba6c50 0xba6c50] 0xc002fe8f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:02.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:02.923: INFO: rc: 1 Jan 7 14:01:02.924: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a210 exit status 1 true [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202080 0xc002202098] [0xba6c50 0xba6c50] 0xc0016d6f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:12.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:13.155: INFO: rc: 1 Jan 7 
14:01:13.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a2d0 exit status 1 true [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020a8 0xc0022020c0 0xc0022020d8] [0xc0022020b8 0xc0022020d0] [0xba6c50 0xba6c50] 0xc0016d7440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:23.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:23.379: INFO: rc: 1 Jan 7 14:01:23.379: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e360 exit status 1 true [0xc0000eaa70 0xc0000eafb0 0xc0000eb190] [0xc0000eaa70 0xc0000eafb0 0xc0000eb190] [0xc0000eaea8 0xc0000eb160] [0xba6c50 0xba6c50] 0xc0025da4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:33.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:33.601: INFO: rc: 1 Jan 7 14:01:33.602: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e420 exit status 1 true [0xc0000eb230 0xc0000eb9a0 0xc0000ebb98] [0xc0000eb230 0xc0000eb9a0 0xc0000ebb98] [0xc0000eb6c8 0xc0000ebb30] [0xba6c50 0xba6c50] 0xc0025daa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:43.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:43.779: INFO: rc: 1 Jan 7 14:01:43.779: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a3f0 exit status 1 true [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020e0 0xc0022020f8 0xc002202110] [0xc0022020f0 0xc002202108] [0xba6c50 0xba6c50] 0xc0016d7800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:01:53.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:01:53.990: INFO: rc: 1 Jan 7 14:01:53.990: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-2" not found [] 0xc001f3a4b0 exit status 1 true [0xc002202118 0xc002202130 0xc002202148] [0xc002202118 0xc002202130 0xc002202148] [0xc002202128 0xc002202140] [0xba6c50 0xba6c50] 0xc0016d7da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:02:03.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:04.148: INFO: rc: 1 Jan 7 14:02:04.148: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e510 exit status 1 true [0xc0000ebba8 0xc0000ebda0 0xc0000ebed0] [0xc0000ebba8 0xc0000ebda0 0xc0000ebed0] [0xc0000ebd90 0xc0000ebde0] [0xba6c50 0xba6c50] 0xc0025dae40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:02:14.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:14.289: INFO: rc: 1 Jan 7 14:02:14.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a0f0 exit status 1 true [0xc002444000 0xc002444018 0xc002444030] [0xc002444000 0xc002444018 0xc002444030] [0xc002444010 0xc002444028] [0xba6c50 0xba6c50] 0xc002cd87e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:02:24.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:24.411: INFO: rc: 1 Jan 7 14:02:24.412: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a210 exit status 1 true [0xc002444038 0xc002444050 0xc002444068] [0xc002444038 0xc002444050 0xc002444068] [0xc002444048 0xc002444060] [0xba6c50 0xba6c50] 0xc002cd8d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:02:34.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:34.620: INFO: rc: 1 Jan 7 14:02:34.620: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f7a090 exit status 1 true [0xc002444000 0xc002444018 0xc002444030] [0xc002444000 0xc002444018 0xc002444030] [0xc002444010 0xc002444028] [0xba6c50 0xba6c50] 0xc002cd87e0 }: Command stdout: stderr: Error from server (NotFound): 
pods "ss-2" not found error: exit status 1 Jan 7 14:02:44.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:44.763: INFO: rc: 1 Jan 7 14:02:44.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e2a0 exit status 1 true [0xc002202000 0xc002202018 0xc002202030] [0xc002202000 0xc002202018 0xc002202030] [0xc002202010 0xc002202028] [0xba6c50 0xba6c50] 0xc0016d68a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:02:54.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:02:54.956: INFO: rc: 1 Jan 7 14:02:54.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e3c0 exit status 1 true [0xc002202038 0xc002202050 0xc002202068] [0xc002202038 0xc002202050 0xc002202068] [0xc002202048 0xc002202060] [0xba6c50 0xba6c50] 0xc0016d6e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:03:04.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:03:05.152: INFO: rc: 1 Jan 7 14:03:05.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a0c0 exit status 1 true [0xc0000eaa70 0xc0000eafb0 0xc0000eb190] [0xc0000eaa70 0xc0000eafb0 0xc0000eb190] [0xc0000eaea8 0xc0000eb160] [0xba6c50 0xba6c50] 0xc0025da4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:03:15.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:03:15.321: INFO: rc: 1 Jan 7 14:03:15.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029aa0f0 exit status 1 true [0xc002c1e000 0xc002c1e018 0xc002c1e030] [0xc002c1e000 0xc002c1e018 0xc002c1e030] [0xc002c1e010 0xc002c1e028] [0xba6c50 0xba6c50] 0xc002fe83c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:03:25.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:03:25.496: INFO: 
rc: 1 Jan 7 14:03:25.497: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f3a1e0 exit status 1 true [0xc0000eb230 0xc0000eb9a0 0xc0000ebb98] [0xc0000eb230 0xc0000eb9a0 0xc0000ebb98] [0xc0000eb6c8 0xc0000ebb30] [0xba6c50 0xba6c50] 0xc0025daa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:03:35.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:03:35.669: INFO: rc: 1 Jan 7 14:03:35.670: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00269e4e0 exit status 1 true [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202070 0xc002202088 0xc0022020a0] [0xc002202080 0xc002202098] [0xba6c50 0xba6c50] 0xc0016d7320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 7 14:03:45.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6059 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 14:03:45.858: INFO: rc: 1 Jan 7 14:03:45.859: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Jan 7 14:03:45.859: INFO: Scaling statefulset ss to 0 Jan 7 14:03:45.874: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 7 14:03:45.877: INFO: Deleting all statefulset in ns statefulset-6059 Jan 7 14:03:45.879: INFO: Scaling statefulset ss to 0 Jan 7 14:03:45.888: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 14:03:45.891: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:03:45.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6059" for this suite. 
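The retry loop condensed above is the suite's RunHostCmd helper re-running a shell command inside pod ss-2 while that pod is intentionally gone during burst scale-down; the test only requires that the loop terminates and the StatefulSet still scales to zero. A stdlib-only sketch of that retry cadence (helper and variable names are illustrative, not the framework's actual code; the kubectl invocation is copied from the log):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// runHostCmd mirrors the log's invocation shape:
// kubectl exec --namespace=<ns> <pod> -- /bin/sh -x -c '<cmd>'
func runHostCmd(ns, pod, cmd string) error {
    out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
        "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
    if err != nil {
        return fmt.Errorf("rc != 0: %v, output: %s", err, out)
    }
    return nil
}

func main() {
    deadline := time.Now().Add(4 * time.Minute)
    for {
        err := runHostCmd("statefulset-6059", "ss-2",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true")
        if err == nil || time.Now().After(deadline) {
            return // success, or give up and proceed as the framework does at 14:03:45
        }
        fmt.Println("Waiting 10s to retry failed RunHostCmd:", err)
        time.Sleep(10 * time.Second)
    }
}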
Jan 7 14:03:51.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:03:52.134: INFO: namespace statefulset-6059 deletion completed in 6.217571717s • [SLOW TEST:359.871 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:03:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:03:52.271: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:03:53.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4276" for this suite. 
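The CustomResourceDefinition spec above only needs to register a definition and remove it again. A sketch of the kind of object being round-tripped, assuming the apiextensions.k8s.io/v1beta1 Go types a v1.15 apiserver serves; the example.com group and Widget kind are illustrative, since the test's actual fixture names are not in the log:

package main

import (
    "encoding/json"
    "fmt"

    apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    crd := &apiextensionsv1beta1.CustomResourceDefinition{
        // metadata.name must be <plural>.<group>
        ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
        Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
            Group:   "example.com",
            Version: "v1",
            Scope:   apiextensionsv1beta1.NamespaceScoped,
            Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
                Plural: "widgets", Singular: "widget",
                Kind: "Widget", ListKind: "WidgetList",
            },
        },
    }
    b, _ := json.MarshalIndent(crd, "", "  ")
    fmt.Println(string(b)) // could be applied with: kubectl apply -f <this JSON>
}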
Jan 7 14:03:59.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:03:59.565: INFO: namespace custom-resource-definition-4276 deletion completed in 6.176895108s • [SLOW TEST:7.430 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:03:59.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 7 14:04:17.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:17.935: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:19.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:19.943: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:21.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:21.950: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:23.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:23.951: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:25.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:25.946: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:27.937: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:27.958: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:29.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:29.946: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:31.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:31.945: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:33.937: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:33.985: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:35.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:35.950: INFO: Pod pod-with-prestop-exec-hook still exists Jan 7 14:04:37.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:37.948: INFO: Pod 
pod-with-prestop-exec-hook still exists Jan 7 14:04:39.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 7 14:04:39.945: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:04:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8739" for this suite. Jan 7 14:05:02.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:05:02.150: INFO: namespace container-lifecycle-hook-8739 deletion completed in 22.16714105s • [SLOW TEST:62.584 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:05:02.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-5dgd STEP: Creating a pod to test atomic-volume-subpath Jan 7 14:05:02.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5dgd" in namespace "subpath-9581" to be "success or failure" Jan 7 14:05:02.417: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Pending", Reason="", readiness=false. Elapsed: 96.024957ms Jan 7 14:05:04.428: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107049622s Jan 7 14:05:06.441: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12041389s Jan 7 14:05:08.452: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13070231s Jan 7 14:05:10.504: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 8.182901994s Jan 7 14:05:12.524: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 10.203265798s Jan 7 14:05:14.543: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 12.222117406s Jan 7 14:05:16.561: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 14.23991463s Jan 7 14:05:18.572: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.25129362s Jan 7 14:05:20.581: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 18.259796101s Jan 7 14:05:22.598: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 20.277194973s Jan 7 14:05:24.613: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 22.291970971s Jan 7 14:05:26.619: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 24.298661036s Jan 7 14:05:28.629: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 26.307973152s Jan 7 14:05:30.648: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Running", Reason="", readiness=true. Elapsed: 28.32693811s Jan 7 14:05:32.662: INFO: Pod "pod-subpath-test-configmap-5dgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.340922225s STEP: Saw pod success Jan 7 14:05:32.662: INFO: Pod "pod-subpath-test-configmap-5dgd" satisfied condition "success or failure" Jan 7 14:05:32.667: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-5dgd container test-container-subpath-configmap-5dgd: STEP: delete the pod Jan 7 14:05:32.783: INFO: Waiting for pod pod-subpath-test-configmap-5dgd to disappear Jan 7 14:05:32.787: INFO: Pod pod-subpath-test-configmap-5dgd no longer exists STEP: Deleting pod pod-subpath-test-configmap-5dgd Jan 7 14:05:32.787: INFO: Deleting pod "pod-subpath-test-configmap-5dgd" in namespace "subpath-9581" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:05:32.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9581" for this suite. 
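The atomic-writer subpath test above mounts a single ConfigMap key through a volumeMount subPath and reads it while the kubelet's atomic writer swaps the backing symlinks. A minimal pod shape for that arrangement (names, image, and command are illustrative; the test's generated names such as pod-subpath-test-configmap-5dgd are random):

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "config",
                VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{
                    LocalObjectReference: v1.LocalObjectReference{Name: "my-config"},
                }},
            }},
            Containers: []v1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"/bin/sh", "-c", "cat /test-volume/data && sleep 30"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "config",
                    MountPath: "/test-volume/data",
                    SubPath:   "data", // mount exactly one key of the ConfigMap as a file
                }},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}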
Jan 7 14:05:38.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:05:39.008: INFO: namespace subpath-9581 deletion completed in 6.215163817s • [SLOW TEST:36.858 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:05:39.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 14:05:39.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce" in namespace "projected-7758" to be "success or failure" Jan 7 14:05:39.270: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 18.174151ms Jan 7 14:05:41.283: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031286455s Jan 7 14:05:43.297: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044915221s Jan 7 14:05:45.310: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057843637s Jan 7 14:05:47.324: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071545762s Jan 7 14:05:49.335: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.082725969s STEP: Saw pod success Jan 7 14:05:49.335: INFO: Pod "downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce" satisfied condition "success or failure" Jan 7 14:05:49.341: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce container client-container: STEP: delete the pod Jan 7 14:05:49.605: INFO: Waiting for pod downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce to disappear Jan 7 14:05:49.612: INFO: Pod downwardapi-volume-ec714192-1513-4239-ad59-7bb8f062d1ce no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:05:49.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7758" for this suite. Jan 7 14:05:55.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:05:55.907: INFO: namespace projected-7758 deletion completed in 6.284872654s • [SLOW TEST:16.898 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:05:55.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-2f2c081a-ace5-4187-94ba-81d8f2c3ec0c STEP: Creating a pod to test consume configMaps Jan 7 14:05:56.028: INFO: Waiting up to 5m0s for pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f" in namespace "configmap-5181" to be "success or failure" Jan 7 14:05:56.034: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201772ms Jan 7 14:05:58.044: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015941371s Jan 7 14:06:00.054: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025923342s Jan 7 14:06:02.061: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033222678s Jan 7 14:06:04.076: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Running", Reason="", readiness=true. Elapsed: 8.047928337s Jan 7 14:06:06.084: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.056016797s STEP: Saw pod success Jan 7 14:06:06.084: INFO: Pod "pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f" satisfied condition "success or failure" Jan 7 14:06:06.088: INFO: Trying to get logs from node iruya-node pod pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f container configmap-volume-test: STEP: delete the pod Jan 7 14:06:06.317: INFO: Waiting for pod pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f to disappear Jan 7 14:06:06.332: INFO: Pod pod-configmaps-32f28874-4b87-4fa9-b20c-7e9963f8634f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:06:06.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5181" for this suite. Jan 7 14:06:12.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:06:12.463: INFO: namespace configmap-5181 deletion completed in 6.122582669s • [SLOW TEST:16.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:06:12.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-96b092df-b116-46d1-b45f-d0d48b7033a8 STEP: Creating secret with name s-test-opt-upd-33144616-9176-4f0b-bb8e-f470e8127c2a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-96b092df-b116-46d1-b45f-d0d48b7033a8 STEP: Updating secret s-test-opt-upd-33144616-9176-4f0b-bb8e-f470e8127c2a STEP: Creating secret with name s-test-opt-create-a1c1ef20-7366-4808-9948-72cf418fbe51 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:06:26.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6055" for this suite. 
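The optional-updates test above deletes one source secret, updates a second, and creates a third, then merely waits for the kubelet to resync the projected volume, which is why the spec spends most of its time in "waiting to observe update in volume". A stdlib sketch of such a wait, with namespace, pod, path, and expected value all illustrative:

package main

import (
    "os/exec"
    "strings"
    "time"
)

// waitForVolumeUpdate polls a file inside the pod until it carries the
// updated value; the kubelet refreshes projected volumes on its sync
// period, so changes land eventually rather than instantly.
func waitForVolumeUpdate(ns, pod, path, want string, timeout time.Duration) bool {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "exec", "-n", ns, pod,
            "--", "cat", path).Output()
        if err == nil && strings.TrimSpace(string(out)) == want {
            return true
        }
        time.Sleep(2 * time.Second)
    }
    return false
}

func main() {
    waitForVolumeUpdate("projected-6055", "pod-projected-secrets",
        "/etc/projected-secret-volume/data-1", "value-2", 2*time.Minute)
}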
Jan 7 14:06:49.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:06:49.231: INFO: namespace projected-6055 deletion completed in 22.247163211s • [SLOW TEST:36.768 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:06:49.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-3a97fb2f-e721-43c1-9bfa-f1c15405256e STEP: Creating a pod to test consume secrets Jan 7 14:06:49.300: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7" in namespace "projected-1597" to be "success or failure" Jan 7 14:06:49.374: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Pending", Reason="", readiness=false. Elapsed: 74.055398ms Jan 7 14:06:51.384: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084021603s Jan 7 14:06:53.397: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097172296s Jan 7 14:06:55.415: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114782596s Jan 7 14:06:57.425: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125199774s Jan 7 14:06:59.436: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135763772s STEP: Saw pod success Jan 7 14:06:59.436: INFO: Pod "pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7" satisfied condition "success or failure" Jan 7 14:06:59.443: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7 container projected-secret-volume-test: STEP: delete the pod Jan 7 14:06:59.613: INFO: Waiting for pod pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7 to disappear Jan 7 14:06:59.626: INFO: Pod pod-projected-secrets-53dd976f-6f6e-4dd9-bd03-ddbb4da0aac7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:06:59.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1597" for this suite. 
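"With mappings" in the spec above means the secret's keys are renamed on their way into the volume via items. A sketch of the projected volume source, with key and path names illustrative:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    vol := v1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: v1.VolumeSource{
            Projected: &v1.ProjectedVolumeSource{
                Sources: []v1.VolumeProjection{{
                    Secret: &v1.SecretProjection{
                        LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test-map"},
                        // remap key "data-1" to a different file name under the mount
                        Items: []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}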
Jan 7 14:07:05.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:07:05.849: INFO: namespace projected-1597 deletion completed in 6.2171656s • [SLOW TEST:16.618 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:07:05.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5880f612-fe3d-432f-9afc-b68772e29dcb STEP: Creating a pod to test consume secrets Jan 7 14:07:06.012: INFO: Waiting up to 5m0s for pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5" in namespace "secrets-8516" to be "success or failure" Jan 7 14:07:06.021: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938875ms Jan 7 14:07:08.034: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021702s Jan 7 14:07:10.045: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033177369s Jan 7 14:07:12.054: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041924439s Jan 7 14:07:14.066: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054007019s STEP: Saw pod success Jan 7 14:07:14.066: INFO: Pod "pod-secrets-43491002-e903-4faf-9fca-104297c068d5" satisfied condition "success or failure" Jan 7 14:07:14.071: INFO: Trying to get logs from node iruya-node pod pod-secrets-43491002-e903-4faf-9fca-104297c068d5 container secret-volume-test: STEP: delete the pod Jan 7 14:07:14.180: INFO: Waiting for pod pod-secrets-43491002-e903-4faf-9fca-104297c068d5 to disappear Jan 7 14:07:14.197: INFO: Pod pod-secrets-43491002-e903-4faf-9fca-104297c068d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:07:14.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8516" for this suite. 
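Every one of these short volume specs follows the same shape visible in the Elapsed lines: create a pod that exits after reading its mount, poll its phase roughly every 2s for up to 5m0s until it reaches Succeeded or Failed, then pull container logs. A stdlib sketch of that poll, using kubectl's jsonpath output in place of a client-go Get (pod name illustrative):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func podPhase(ns, pod string) string {
    out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns,
        "-o", "jsonpath={.status.phase}").Output()
    if err != nil {
        return ""
    }
    return string(out)
}

func main() {
    start := time.Now()
    for time.Since(start) < 5*time.Minute { // the same 5m0s budget the log shows
        phase := podPhase("secrets-8516", "pod-secrets-test")
        fmt.Printf("Phase=%q. Elapsed: %s\n", phase, time.Since(start))
        if phase == "Succeeded" || phase == "Failed" {
            return // "success or failure" condition satisfied
        }
        time.Sleep(2 * time.Second) // matches the ~2s gaps between log entries
    }
}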
Jan 7 14:07:20.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:07:20.365: INFO: namespace secrets-8516 deletion completed in 6.154124695s • [SLOW TEST:14.514 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:07:20.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:07:28.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9373" for this suite. Jan 7 14:08:10.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:08:10.753: INFO: namespace kubelet-test-9373 deletion completed in 42.183195154s • [SLOW TEST:50.388 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:08:10.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 7 14:08:11.129: INFO: Number of nodes with available pods: 0 Jan 7 14:08:11.130: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:12.678: INFO: Number of nodes with available pods: 0 Jan 7 14:08:12.678: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:13.211: INFO: Number of nodes with available pods: 0 Jan 7 14:08:13.211: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:14.148: INFO: Number of nodes with available pods: 0 Jan 7 14:08:14.148: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:15.179: INFO: Number of nodes with available pods: 0 Jan 7 14:08:15.179: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:16.832: INFO: Number of nodes with available pods: 0 Jan 7 14:08:16.832: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:17.234: INFO: Number of nodes with available pods: 0 Jan 7 14:08:17.235: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:18.148: INFO: Number of nodes with available pods: 0 Jan 7 14:08:18.148: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:19.154: INFO: Number of nodes with available pods: 0 Jan 7 14:08:19.154: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:20.143: INFO: Number of nodes with available pods: 1 Jan 7 14:08:20.143: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:21.189: INFO: Number of nodes with available pods: 2 Jan 7 14:08:21.189: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jan 7 14:08:21.232: INFO: Number of nodes with available pods: 1 Jan 7 14:08:21.232: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:22.249: INFO: Number of nodes with available pods: 1 Jan 7 14:08:22.250: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:23.260: INFO: Number of nodes with available pods: 1 Jan 7 14:08:23.260: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:24.242: INFO: Number of nodes with available pods: 1 Jan 7 14:08:24.242: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:25.250: INFO: Number of nodes with available pods: 1 Jan 7 14:08:25.250: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:26.250: INFO: Number of nodes with available pods: 1 Jan 7 14:08:26.250: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:27.263: INFO: Number of nodes with available pods: 1 Jan 7 14:08:27.263: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:28.248: INFO: Number of nodes with available pods: 1 Jan 7 14:08:28.249: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:29.252: INFO: Number of nodes with available pods: 1 Jan 7 14:08:29.252: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:30.258: INFO: Number of nodes with available pods: 1 Jan 7 14:08:30.258: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:31.262: INFO: Number of nodes with available pods: 1 Jan 7 14:08:31.262: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:32.252: INFO: Number of nodes with available pods: 1 Jan 7 14:08:32.252: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:33.279: INFO: Number of nodes with available pods: 1 Jan 7 14:08:33.279: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:34.250: INFO: Number of nodes with 
available pods: 1 Jan 7 14:08:34.250: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:35.247: INFO: Number of nodes with available pods: 1 Jan 7 14:08:35.248: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:36.270: INFO: Number of nodes with available pods: 1 Jan 7 14:08:36.271: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:37.247: INFO: Number of nodes with available pods: 1 Jan 7 14:08:37.247: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:38.258: INFO: Number of nodes with available pods: 1 Jan 7 14:08:38.259: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:39.248: INFO: Number of nodes with available pods: 1 Jan 7 14:08:39.248: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:40.245: INFO: Number of nodes with available pods: 1 Jan 7 14:08:40.245: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:41.255: INFO: Number of nodes with available pods: 1 Jan 7 14:08:41.255: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:42.285: INFO: Number of nodes with available pods: 1 Jan 7 14:08:42.285: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:43.254: INFO: Number of nodes with available pods: 1 Jan 7 14:08:43.254: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:44.244: INFO: Number of nodes with available pods: 1 Jan 7 14:08:44.245: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:45.314: INFO: Number of nodes with available pods: 1 Jan 7 14:08:45.314: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:08:46.249: INFO: Number of nodes with available pods: 2 Jan 7 14:08:46.249: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5547, will wait for the garbage collector to delete the pods Jan 7 14:08:46.343: INFO: Deleting DaemonSet.extensions daemon-set took: 34.431761ms Jan 7 14:08:46.643: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.552957ms Jan 7 14:08:56.652: INFO: Number of nodes with available pods: 0 Jan 7 14:08:56.652: INFO: Number of running nodes: 0, number of available pods: 0 Jan 7 14:08:56.656: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5547/daemonsets","resourceVersion":"19657793"},"items":null} Jan 7 14:08:56.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5547/pods","resourceVersion":"19657793"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:08:56.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5547" for this suite. 
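The node-counting loop above, including the repeated "is running more than one daemon pod" line (which the framework appears to emit whenever a node's daemon-pod count is not exactly one, even when it is zero), reduces to comparing the DaemonSet's desired and available counts. A kubectl-based sketch:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// dsCounts reads desired/available pod counts from the DaemonSet status.
func dsCounts(ns, name string) (desired, available string) {
    out, _ := exec.Command("kubectl", "get", "ds", name, "-n", ns, "-o",
        "jsonpath={.status.desiredNumberScheduled} {.status.numberAvailable}").Output()
    fields := strings.Fields(string(out))
    if len(fields) == 2 {
        return fields[0], fields[1]
    }
    return "", ""
}

func main() {
    for i := 0; i < 60; i++ {
        d, a := dsCounts("daemonsets-5547", "daemon-set")
        fmt.Printf("desired: %s, available: %s\n", d, a)
        if d != "" && d == a {
            return // every schedulable node runs an available daemon pod
        }
        time.Sleep(time.Second)
    }
}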
Jan 7 14:09:02.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:09:02.834: INFO: namespace daemonsets-5547 deletion completed in 6.158303664s • [SLOW TEST:52.081 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:09:02.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:09:03.057: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ad87cb29-f8ae-48d0-85f3-cc2a63b56b68", Controller:(*bool)(0xc0000597da), BlockOwnerDeletion:(*bool)(0xc0000597db)}} Jan 7 14:09:03.123: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c9d78a62-280b-4a03-891a-5e1d0d95d3c6", Controller:(*bool)(0xc00215e522), BlockOwnerDeletion:(*bool)(0xc00215e523)}} Jan 7 14:09:03.281: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ae9683f3-c2ec-453b-b162-83b8de3785fa", Controller:(*bool)(0xc00215e6d2), BlockOwnerDeletion:(*bool)(0xc00215e6d3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:09:08.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8599" for this suite. 
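The three OwnerReferences dumps above form a deliberate cycle, pod1 owned by pod3, pod2 by pod1, and pod3 by pod2, which the garbage collector must still delete rather than deadlock on. A sketch of how one such reference is built (UID and names copied from the log; the Controller/BlockOwnerDeletion values print only as pointer addresses there, so false is illustrative):

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    // pod1 declares pod3 as its owner; pod2 -> pod1 and pod3 -> pod2 close the cycle.
    ref := metav1.OwnerReference{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               "pod3",
        UID:                types.UID("ad87cb29-f8ae-48d0-85f3-cc2a63b56b68"),
        Controller:         boolPtr(false),
        BlockOwnerDeletion: boolPtr(false),
    }
    b, _ := json.MarshalIndent([]metav1.OwnerReference{ref}, "", "  ")
    fmt.Println(string(b)) // goes into pod1.ObjectMeta.OwnerReferences
}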
Jan 7 14:09:14.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:09:14.859: INFO: namespace gc-8599 deletion completed in 6.536367116s • [SLOW TEST:12.024 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:09:14.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 7 14:09:14.991: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:09:28.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8285" for this suite. 
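"PodSpec: initContainers in spec.initContainers" above is the framework echoing the pod it created: a RestartPolicy=Never pod whose init containers must each run to completion, in order, before the app container starts. A minimal shape with illustrative images and commands:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-test"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            // init containers run sequentially; each must exit 0 before the next starts
            InitContainers: []v1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []v1.Container{
                {Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
            },
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}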
Jan 7 14:09:34.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:09:34.658: INFO: namespace init-container-8285 deletion completed in 6.114975376s • [SLOW TEST:19.798 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:09:34.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-70f820a3-1b2a-4728-ae01-99c520052d30 STEP: Creating secret with name s-test-opt-upd-8b4bb3b1-929b-45d5-aa1f-4f8a0a98196f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-70f820a3-1b2a-4728-ae01-99c520052d30 STEP: Updating secret s-test-opt-upd-8b4bb3b1-929b-45d5-aa1f-4f8a0a98196f STEP: Creating secret with name s-test-opt-create-1aad9973-a2ba-4f01-a70a-c2acd4c030f3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:11:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1574" for this suite. 
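As in the projected-secret variant earlier, deleting the s-test-opt-del-… secret does not kill the pod because the volume source is marked optional. The relevant knob, with names illustrative:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    vol := v1.Volume{
        Name: "secret-volume",
        VolumeSource: v1.VolumeSource{
            Secret: &v1.SecretVolumeSource{
                SecretName: "s-test-opt-del",
                Optional:   &optional, // a missing or deleted secret is tolerated
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}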
Jan 7 14:11:42.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:11:43.068: INFO: namespace secrets-1574 deletion completed in 22.161983415s • [SLOW TEST:128.410 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:11:43.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 14:11:43.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b" in namespace "downward-api-73" to be "success or failure" Jan 7 14:11:43.246: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.649624ms Jan 7 14:11:45.253: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020380364s Jan 7 14:11:47.261: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02921078s Jan 7 14:11:49.275: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043026957s Jan 7 14:11:51.285: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052981909s Jan 7 14:11:53.297: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064416948s STEP: Saw pod success Jan 7 14:11:53.297: INFO: Pod "downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b" satisfied condition "success or failure" Jan 7 14:11:53.302: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b container client-container: STEP: delete the pod Jan 7 14:11:53.458: INFO: Waiting for pod downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b to disappear Jan 7 14:11:53.471: INFO: Pod downwardapi-volume-5163dc3b-84b9-4fb7-827e-9d691abda04b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:11:53.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-73" for this suite. 
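This downward-API volume surfaces the container's own memory request as a file through a resourceFieldRef. A sketch of the volume (the client-container name matches the container the log pulls logs from; the path is illustrative):

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    vol := v1.Volume{
        Name: "podinfo",
        VolumeSource: v1.VolumeSource{
            DownwardAPI: &v1.DownwardAPIVolumeSource{
                Items: []v1.DownwardAPIVolumeFile{{
                    Path: "memory_request",
                    // expose this container's requests.memory as file contents
                    ResourceFieldRef: &v1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "requests.memory",
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}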
Jan 7 14:11:59.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:11:59.648: INFO: namespace downward-api-73 deletion completed in 6.168220704s • [SLOW TEST:16.579 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:11:59.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1334 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 7 14:11:59.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 7 14:12:32.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 14:12:32.117: INFO: >>> kubeConfig: /root/.kube/config I0107 14:12:32.215860 8 log.go:172] (0xc0024f4840) (0xc002ad94a0) Create stream I0107 14:12:32.216205 8 log.go:172] (0xc0024f4840) (0xc002ad94a0) Stream added, broadcasting: 1 I0107 14:12:32.235977 8 log.go:172] (0xc0024f4840) Reply frame received for 1 I0107 14:12:32.236210 8 log.go:172] (0xc0024f4840) (0xc002ad9540) Create stream I0107 14:12:32.236252 8 log.go:172] (0xc0024f4840) (0xc002ad9540) Stream added, broadcasting: 3 I0107 14:12:32.245551 8 log.go:172] (0xc0024f4840) Reply frame received for 3 I0107 14:12:32.245708 8 log.go:172] (0xc0024f4840) (0xc001f9e640) Create stream I0107 14:12:32.245739 8 log.go:172] (0xc0024f4840) (0xc001f9e640) Stream added, broadcasting: 5 I0107 14:12:32.248890 8 log.go:172] (0xc0024f4840) Reply frame received for 5 I0107 14:12:32.417678 8 log.go:172] (0xc0024f4840) Data frame received for 3 I0107 14:12:32.417824 8 log.go:172] (0xc002ad9540) (3) Data frame handling I0107 14:12:32.417853 8 log.go:172] (0xc002ad9540) (3) Data frame sent I0107 14:12:32.642160 8 log.go:172] (0xc0024f4840) (0xc002ad9540) Stream removed, broadcasting: 3 I0107 14:12:32.642484 8 log.go:172] (0xc0024f4840) Data frame received for 1 I0107 14:12:32.642514 8 log.go:172] (0xc002ad94a0) (1) Data frame handling I0107 14:12:32.642579 8 log.go:172] (0xc002ad94a0) (1) Data frame sent I0107 14:12:32.642589 8 log.go:172] (0xc0024f4840) (0xc002ad94a0) Stream removed, broadcasting: 1 I0107 14:12:32.642622 8 log.go:172] (0xc0024f4840) (0xc001f9e640) Stream removed, 
broadcasting: 5 I0107 14:12:32.642725 8 log.go:172] (0xc0024f4840) Go away received I0107 14:12:32.643197 8 log.go:172] (0xc0024f4840) (0xc002ad94a0) Stream removed, broadcasting: 1 I0107 14:12:32.643223 8 log.go:172] (0xc0024f4840) (0xc002ad9540) Stream removed, broadcasting: 3 I0107 14:12:32.643232 8 log.go:172] (0xc0024f4840) (0xc001f9e640) Stream removed, broadcasting: 5 Jan 7 14:12:32.643: INFO: Found all expected endpoints: [netserver-0] Jan 7 14:12:32.655: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 14:12:32.655: INFO: >>> kubeConfig: /root/.kube/config I0107 14:12:32.713513 8 log.go:172] (0xc0024f54a0) (0xc002ad9900) Create stream I0107 14:12:32.713595 8 log.go:172] (0xc0024f54a0) (0xc002ad9900) Stream added, broadcasting: 1 I0107 14:12:32.718479 8 log.go:172] (0xc0024f54a0) Reply frame received for 1 I0107 14:12:32.718510 8 log.go:172] (0xc0024f54a0) (0xc001d4c3c0) Create stream I0107 14:12:32.718523 8 log.go:172] (0xc0024f54a0) (0xc001d4c3c0) Stream added, broadcasting: 3 I0107 14:12:32.719723 8 log.go:172] (0xc0024f54a0) Reply frame received for 3 I0107 14:12:32.719740 8 log.go:172] (0xc0024f54a0) (0xc001c928c0) Create stream I0107 14:12:32.719747 8 log.go:172] (0xc0024f54a0) (0xc001c928c0) Stream added, broadcasting: 5 I0107 14:12:32.720888 8 log.go:172] (0xc0024f54a0) Reply frame received for 5 I0107 14:12:32.816662 8 log.go:172] (0xc0024f54a0) Data frame received for 3 I0107 14:12:32.816693 8 log.go:172] (0xc001d4c3c0) (3) Data frame handling I0107 14:12:32.816717 8 log.go:172] (0xc001d4c3c0) (3) Data frame sent I0107 14:12:33.002281 8 log.go:172] (0xc0024f54a0) Data frame received for 1 I0107 14:12:33.002485 8 log.go:172] (0xc0024f54a0) (0xc001c928c0) Stream removed, broadcasting: 5 I0107 14:12:33.002588 8 log.go:172] (0xc002ad9900) (1) Data frame handling I0107 14:12:33.002633 8 log.go:172] (0xc0024f54a0) (0xc001d4c3c0) Stream removed, broadcasting: 3 I0107 14:12:33.002706 8 log.go:172] (0xc002ad9900) (1) Data frame sent I0107 14:12:33.002722 8 log.go:172] (0xc0024f54a0) (0xc002ad9900) Stream removed, broadcasting: 1 I0107 14:12:33.002747 8 log.go:172] (0xc0024f54a0) Go away received I0107 14:12:33.002984 8 log.go:172] (0xc0024f54a0) (0xc002ad9900) Stream removed, broadcasting: 1 I0107 14:12:33.002998 8 log.go:172] (0xc0024f54a0) (0xc001d4c3c0) Stream removed, broadcasting: 3 I0107 14:12:33.003003 8 log.go:172] (0xc0024f54a0) (0xc001c928c0) Stream removed, broadcasting: 5 Jan 7 14:12:33.003: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:12:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1334" for this suite. 
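Note on the networking test above: the connectivity check is a plain HTTP probe against each netserver pod IP on port 8080, run from the helper pod; the exact command appears in the ExecWithOptions lines and can be replayed by hand while the test namespace still exists:

    kubectl exec -n pod-network-test-1334 host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'"

The spec passes once every expected endpoint (here netserver-0 and netserver-1) has answered with its hostname.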
Jan 7 14:12:57.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:12:57.311: INFO: namespace pod-network-test-1334 deletion completed in 24.299511677s • [SLOW TEST:57.663 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:12:57.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jan 7 14:13:06.075: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6550 pod-service-account-cab807ed-0a1a-40c8-ba8f-06d13289f313 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 7 14:13:08.649: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6550 pod-service-account-cab807ed-0a1a-40c8-ba8f-06d13289f313 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 7 14:13:09.221: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6550 pod-service-account-cab807ed-0a1a-40c8-ba8f-06d13289f313 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:13:09.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6550" for this suite. 
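Note on the service-account test above: each of the three kubectl exec invocations reads one file of the projected credential under /var/run/secrets/kubernetes.io/serviceaccount. The same check can be run manually while the pod exists:

    kubectl exec -n svcaccounts-6550 pod-service-account-cab807ed-0a1a-40c8-ba8f-06d13289f313 -c test -- \
      cat /var/run/secrets/kubernetes.io/serviceaccount/token

with ca.crt and namespace read the same way from the same directory.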
Jan 7 14:13:15.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:13:15.951: INFO: namespace svcaccounts-6550 deletion completed in 6.208892262s • [SLOW TEST:18.639 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:13:15.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 14:13:16.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89" in namespace "projected-3095" to be "success or failure" Jan 7 14:13:16.033: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Pending", Reason="", readiness=false. Elapsed: 7.267225ms Jan 7 14:13:18.044: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01827079s Jan 7 14:13:20.052: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026579472s Jan 7 14:13:22.062: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036247554s Jan 7 14:13:24.081: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054803835s Jan 7 14:13:26.113: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087360646s STEP: Saw pod success Jan 7 14:13:26.113: INFO: Pod "downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89" satisfied condition "success or failure" Jan 7 14:13:26.119: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89 container client-container: STEP: delete the pod Jan 7 14:13:26.278: INFO: Waiting for pod downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89 to disappear Jan 7 14:13:26.286: INFO: Pod downwardapi-volume-f62b61e3-4b96-4192-b1c6-5219fe0c5e89 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:13:26.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3095" for this suite. 
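Note on the projected downward API test above: it is the projected-volume variant of the resourceFieldRef mechanism, here exposing limits.cpu. A minimal sketch with illustrative names and an assumed 1m divisor:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m
    EOF
    kubectl logs projected-cpu-demo   # expected output: 500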
Jan 7 14:13:32.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:13:32.394: INFO: namespace projected-3095 deletion completed in 6.092297483s • [SLOW TEST:16.442 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:13:32.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 7 14:13:32.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658441,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 7 14:13:32.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658442,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 7 14:13:32.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658443,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 7 14:13:42.609: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658458,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 7 14:13:42.610: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658459,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 7 14:13:42.610: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-573,SelfLink:/api/v1/namespaces/watch-573/configmaps/e2e-watch-test-label-changed,UID:f9d05586-27c2-47e0-806a-49e1d5ad94ac,ResourceVersion:19658460,Generation:0,CreationTimestamp:2020-01-07 14:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:13:42.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-573" for this suite. 
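Note on the watch test above: the DELETED/ADDED pairs it records are the standard semantics of a label-selector watch, where an object leaving the selector is reported as deleted and re-entering it as added. The same stream can be observed with:

    kubectl get configmaps -n watch-573 -l watch-this-configmap=label-changed-and-restored --watch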
Jan 7 14:13:48.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:13:48.825: INFO: namespace watch-573 deletion completed in 6.20758395s • [SLOW TEST:16.430 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:13:48.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-463e4cce-3d5a-4373-a1be-f926749062d1 STEP: Creating a pod to test consume secrets Jan 7 14:13:48.999: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b" in namespace "secrets-5621" to be "success or failure" Jan 7 14:13:49.008: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.869612ms Jan 7 14:13:51.014: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015070651s Jan 7 14:13:53.024: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025164899s Jan 7 14:13:55.033: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034027998s Jan 7 14:13:57.061: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062154998s STEP: Saw pod success Jan 7 14:13:57.062: INFO: Pod "pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b" satisfied condition "success or failure" Jan 7 14:13:57.079: INFO: Trying to get logs from node iruya-node pod pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b container secret-volume-test: STEP: delete the pod Jan 7 14:13:57.159: INFO: Waiting for pod pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b to disappear Jan 7 14:13:57.168: INFO: Pod pod-secrets-bc6cf847-c139-4778-847a-34df958c1e0b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:13:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5621" for this suite. 
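Note on the secret volume test above: the "with mappings" variant maps a secret key to a chosen path and file mode inside the volume instead of using the key name directly. A minimal sketch with illustrative key, path, and mode:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0644
    EOF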
Jan 7 14:14:03.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:14:03.343: INFO: namespace secrets-5621 deletion completed in 6.168752132s • [SLOW TEST:14.517 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:14:03.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:14:03.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7212" for this suite. 
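Note on the QOS class test above: the class is derived from resource requests and limits, with no requests or limits on any container yielding BestEffort, requests equal to limits on every container yielding Guaranteed, and anything in between Burstable. It is readable straight from pod status (placeholder names):

    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.qosClass}'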
Jan 7 14:14:25.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:14:25.734: INFO: namespace pods-7212 deletion completed in 22.260162846s • [SLOW TEST:22.391 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:14:25.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:14:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8168" for this suite. 
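Note on the kubelet test above: for a container whose command always fails, the kubelet records a terminated state whose reason is Error for a non-zero exit. Depending on whether the container is currently between restarts, the field appears under state.terminated or lastState.terminated (placeholder names):

    kubectl get pod <pod-name> -n <namespace> \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'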
Jan 7 14:14:39.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:14:40.150: INFO: namespace kubelet-test-8168 deletion completed in 6.204320199s • [SLOW TEST:14.416 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:14:40.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 7 14:14:50.409: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-753dc326-e0b9-423f-a130-c2edbf08124e,GenerateName:,Namespace:events-7938,SelfLink:/api/v1/namespaces/events-7938/pods/send-events-753dc326-e0b9-423f-a130-c2edbf08124e,UID:45d514f2-46e4-4e0f-9e0b-d4381b1db964,ResourceVersion:19658629,Generation:0,CreationTimestamp:2020-01-07 14:14:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 295971907,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5dw27 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5dw27,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5dw27 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030030d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0030030f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:14:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:14:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:14:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:14:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-07 14:14:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-07 14:14:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://968131ced6baee46ea414e01622ca1c382bfd1c7e647a7dc01734cf64dc0ee5f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 7 14:14:52.417: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 7 14:14:54.429: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:14:54.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7938" for this suite. Jan 7 14:15:38.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:15:38.653: INFO: namespace events-7938 deletion completed in 44.176264911s • [SLOW TEST:58.502 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:15:38.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 7 14:15:39.296: INFO: created pod pod-service-account-defaultsa Jan 7 14:15:39.296: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 7 14:15:39.326: INFO: created pod pod-service-account-mountsa Jan 7 14:15:39.326: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 7 14:15:39.350: INFO: created pod pod-service-account-nomountsa Jan 7 14:15:39.350: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 7 14:15:39.383: INFO: created pod pod-service-account-defaultsa-mountspec 
Jan 7 14:15:39.383: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 7 14:15:39.514: INFO: created pod pod-service-account-mountsa-mountspec Jan 7 14:15:39.515: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 7 14:15:39.565: INFO: created pod pod-service-account-nomountsa-mountspec Jan 7 14:15:39.565: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 7 14:15:40.160: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 7 14:15:40.160: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 7 14:15:40.744: INFO: created pod pod-service-account-mountsa-nomountspec Jan 7 14:15:40.744: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 7 14:15:40.821: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 7 14:15:40.822: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:15:40.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9377" for this suite. Jan 7 14:16:28.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:16:28.908: INFO: namespace svcaccounts-9377 deletion completed in 47.2839752s • [SLOW TEST:50.255 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:16:28.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 14:16:29.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d" in namespace "downward-api-4644" to be "success or failure" Jan 7 14:16:29.006: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166522ms Jan 7 14:16:31.014: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011866302s Jan 7 14:16:33.024: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.021975691s Jan 7 14:16:35.033: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030781849s Jan 7 14:16:37.042: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040162623s Jan 7 14:16:39.052: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049970433s STEP: Saw pod success Jan 7 14:16:39.052: INFO: Pod "downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d" satisfied condition "success or failure" Jan 7 14:16:39.057: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d container client-container: STEP: delete the pod Jan 7 14:16:39.141: INFO: Waiting for pod downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d to disappear Jan 7 14:16:39.202: INFO: Pod downwardapi-volume-a9dab642-bfb3-44d9-99a1-f5f13e42fa8d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:16:39.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4644" for this suite. Jan 7 14:16:45.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:16:45.404: INFO: namespace downward-api-4644 deletion completed in 6.193874516s • [SLOW TEST:16.495 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:16:45.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 7 14:16:45.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8454' Jan 7 14:16:45.670: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 7 14:16:45.671: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 7 14:16:45.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8454' Jan 7 14:16:45.958: INFO: stderr: "" Jan 7 14:16:45.959: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:16:45.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8454" for this suite. Jan 7 14:16:52.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:16:52.135: INFO: namespace kubectl-8454 deletion completed in 6.157292094s • [SLOW TEST:6.731 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:16:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-171e4502-24d1-44fa-aa01-95f177847e65 STEP: Creating a pod to test consume configMaps Jan 7 14:16:52.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8" in namespace "configmap-5623" to be "success or failure" Jan 7 14:16:52.461: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.198377ms Jan 7 14:16:54.472: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061264535s Jan 7 14:16:56.481: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070447649s Jan 7 14:16:58.496: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084906663s Jan 7 14:17:00.511: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.099680802s Jan 7 14:17:02.524: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112930228s STEP: Saw pod success Jan 7 14:17:02.524: INFO: Pod "pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8" satisfied condition "success or failure" Jan 7 14:17:02.532: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8 container configmap-volume-test: STEP: delete the pod Jan 7 14:17:02.681: INFO: Waiting for pod pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8 to disappear Jan 7 14:17:02.744: INFO: Pod pod-configmaps-e77ac9a3-e24e-4fea-aca0-9291e22d11b8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:17:02.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5623" for this suite. Jan 7 14:17:08.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:17:08.982: INFO: namespace configmap-5623 deletion completed in 6.224604831s • [SLOW TEST:16.845 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:17:08.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 7 14:17:09.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f" in namespace "downward-api-4" to be "success or failure" Jan 7 14:17:09.143: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.5141ms Jan 7 14:17:11.154: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024024369s Jan 7 14:17:13.162: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032081107s Jan 7 14:17:15.175: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.044883072s Jan 7 14:17:17.185: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054556594s Jan 7 14:17:19.194: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064285958s STEP: Saw pod success Jan 7 14:17:19.195: INFO: Pod "downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f" satisfied condition "success or failure" Jan 7 14:17:19.199: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f container client-container: STEP: delete the pod Jan 7 14:17:19.466: INFO: Waiting for pod downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f to disappear Jan 7 14:17:19.512: INFO: Pod downwardapi-volume-85f9bb83-b4b9-44f6-a06a-6ee0cdab304f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:17:19.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4" for this suite. Jan 7 14:17:25.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:17:25.731: INFO: namespace downward-api-4 deletion completed in 6.212807192s • [SLOW TEST:16.748 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:17:25.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:17:36.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4879" for this suite. 
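Note on the wrapper test above: secret and configMap volumes are materialized by the kubelet on top of wrapped emptyDirs, and the spec checks that two such volumes on one pod do not collide. A minimal sketch, assuming the secret and configmap are created first (illustrative names):

    kubectl create secret generic demo-secret --from-literal=k=v
    kubectl create configmap demo-config --from-literal=k=v
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapper-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "ls /etc/secret /etc/config && sleep 3600"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
        - name: config-vol
          mountPath: /etc/config
      volumes:
      - name: secret-vol
        secret:
          secretName: demo-secret
      - name: config-vol
        configMap:
          name: demo-config
    EOF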
Jan 7 14:17:42.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:17:42.354: INFO: namespace emptydir-wrapper-4879 deletion completed in 6.240658871s • [SLOW TEST:16.622 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:17:42.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-kzn2 STEP: Creating a pod to test atomic-volume-subpath Jan 7 14:17:42.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kzn2" in namespace "subpath-1774" to be "success or failure" Jan 7 14:17:42.512: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.069166ms Jan 7 14:17:44.524: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017862864s Jan 7 14:17:46.531: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024737404s Jan 7 14:17:48.546: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039826798s Jan 7 14:17:50.562: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 8.05590734s Jan 7 14:17:52.576: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 10.069345401s Jan 7 14:17:54.582: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 12.075270878s Jan 7 14:17:56.595: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 14.088378167s Jan 7 14:17:58.607: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 16.100793702s Jan 7 14:18:00.630: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 18.123567883s Jan 7 14:18:02.641: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 20.134437331s Jan 7 14:18:04.650: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 22.143205531s Jan 7 14:18:06.708: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.201843668s Jan 7 14:18:08.721: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 26.214190809s Jan 7 14:18:10.730: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Running", Reason="", readiness=true. Elapsed: 28.223836224s Jan 7 14:18:12.738: INFO: Pod "pod-subpath-test-configmap-kzn2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.23129423s STEP: Saw pod success Jan 7 14:18:12.738: INFO: Pod "pod-subpath-test-configmap-kzn2" satisfied condition "success or failure" Jan 7 14:18:12.742: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-kzn2 container test-container-subpath-configmap-kzn2: STEP: delete the pod Jan 7 14:18:12.791: INFO: Waiting for pod pod-subpath-test-configmap-kzn2 to disappear Jan 7 14:18:12.797: INFO: Pod pod-subpath-test-configmap-kzn2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-kzn2 Jan 7 14:18:12.797: INFO: Deleting pod "pod-subpath-test-configmap-kzn2" in namespace "subpath-1774" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:18:12.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1774" for this suite. Jan 7 14:18:18.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:18:19.031: INFO: namespace subpath-1774 deletion completed in 6.224548845s • [SLOW TEST:36.677 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:18:19.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 7 14:18:27.894: INFO: Successfully updated pod "annotationupdate6c9f8f34-f3e5-4fae-b48a-0aed342e441f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:18:30.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5824" for this suite. 
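Note on the annotation test above: it depends on the kubelet refreshing downward API volume contents after pod metadata changes, which is why the framework polls rather than asserting immediately. A minimal sketch of the volume plus the in-place update that triggers the refresh (illustrative names):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        demo-key: initial
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF
    kubectl annotate pod annotationupdate-demo demo-key=updated --overwrite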
Jan 7 14:18:52.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:18:52.264: INFO: namespace projected-5824 deletion completed in 22.206774877s • [SLOW TEST:33.233 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:18:52.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 7 14:18:52.454: INFO: Waiting up to 5m0s for pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead" in namespace "emptydir-3907" to be "success or failure" Jan 7 14:18:52.467: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead": Phase="Pending", Reason="", readiness=false. Elapsed: 13.349603ms Jan 7 14:18:54.477: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022509303s Jan 7 14:18:56.491: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036858005s Jan 7 14:18:58.505: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050705655s Jan 7 14:19:00.557: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102833119s STEP: Saw pod success Jan 7 14:19:00.557: INFO: Pod "pod-10736706-8ed4-4fdc-8c8a-8058bef9fead" satisfied condition "success or failure" Jan 7 14:19:00.562: INFO: Trying to get logs from node iruya-node pod pod-10736706-8ed4-4fdc-8c8a-8058bef9fead container test-container: STEP: delete the pod Jan 7 14:19:00.629: INFO: Waiting for pod pod-10736706-8ed4-4fdc-8c8a-8058bef9fead to disappear Jan 7 14:19:00.634: INFO: Pod pod-10736706-8ed4-4fdc-8c8a-8058bef9fead no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:19:00.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3907" for this suite. 
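Note on the emptyDir test above: the (non-root,0644,tmpfs) matrix point combines a memory-backed emptyDir, a non-root security context, and a 0644 file mode written from inside the container. A minimal sketch with an illustrative UID:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # non-root; illustrative UID
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hello > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume"]
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume
      volumes:
      - name: vol
        emptyDir:
          medium: Memory         # tmpfs-backed
    EOF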
Jan 7 14:19:06.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 14:19:06.890: INFO: namespace emptydir-3907 deletion completed in 6.247970815s

• [SLOW TEST:14.625 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 7 14:19:06.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 7 14:19:06.959: INFO: Creating deployment "nginx-deployment"
Jan 7 14:19:07.029: INFO: Waiting for observed generation 1
Jan 7 14:19:09.755: INFO: Waiting for all required pods to come up
Jan 7 14:19:10.333: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 7 14:19:32.796: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 7 14:19:32.808: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 7 14:19:32.824: INFO: Updating deployment nginx-deployment
Jan 7 14:19:32.824: INFO: Waiting for observed generation 2
Jan 7 14:19:36.513: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 7 14:19:36.535: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 7 14:19:36.729: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 7 14:19:36.818: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 7 14:19:36.818: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 7 14:19:36.822: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 7 14:19:36.931: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 7 14:19:36.931: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 7 14:19:36.943: INFO: Updating deployment nginx-deployment
Jan 7 14:19:36.943: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 7 14:19:38.485: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 7 14:19:38.534: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 7 14:19:44.568: INFO: Deployment "nginx-deployment":
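
Note on the assertions just above (.spec.replicas = 20 and = 13): they are the proportional-scaling arithmetic itself. At scale time the two ReplicaSets held 8 and 5 replicas (13 total), and scaling the Deployment from 10 to 30 with maxSurge=3 permits 33 replicas, which the controller splits pro rata between the ReplicaSets. A back-of-the-envelope Go sketch of that split, not the controller's exact GetProportion implementation (which also respects surge limits and annotation bookkeeping); the Deployment's own object dump continues below:

package main

import "fmt"

// proportionalScale distributes newTotal replicas across the current
// ReplicaSet sizes in proportion to their share of the old total, handing
// the rounding leftover to the last ReplicaSet. A simplification that is
// sufficient to reproduce the numbers asserted in this run.
func proportionalScale(current []int32, newTotal int32) []int32 {
	var oldTotal int32
	for _, n := range current {
		oldTotal += n
	}
	out := make([]int32, len(current))
	var assigned int32
	for i, n := range current {
		out[i] = n * newTotal / oldTotal // floor division
		assigned += out[i]
	}
	out[len(out)-1] += newTotal - assigned // hand out the leftover
	return out
}

func main() {
	// From the log: old RS at 8, new (broken-image) RS at 5, deployment
	// scaled from 10 to 30 with maxSurge=3, so 33 total replicas allowed.
	replicas, maxSurge := int32(30), int32(3)
	fmt.Println(proportionalScale([]int32{8, 5}, replicas+maxSurge))
	// Prints [20 13], matching ".spec.replicas = 20" and "= 13" above.
}
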
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5285,SelfLink:/apis/apps/v1/namespaces/deployment-5285/deployments/nginx-deployment,UID:54a8136d-786f-4b0b-83b3-f2fd80d1ecbe,ResourceVersion:19659555,Generation:3,CreationTimestamp:2020-01-07 14:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-07 14:19:36 +0000 UTC 2020-01-07 14:19:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-07 14:19:37 +0000 UTC 2020-01-07 14:19:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 7 14:19:45.061: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5285,SelfLink:/apis/apps/v1/namespaces/deployment-5285/replicasets/nginx-deployment-55fb7cb77f,UID:975612d5-8d56-4ed7-96b4-f0eb8259666a,ResourceVersion:19659553,Generation:3,CreationTimestamp:2020-01-07 14:19:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 54a8136d-786f-4b0b-83b3-f2fd80d1ecbe 0xc0029adcc7 0xc0029adcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 14:19:45.062: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 7 14:19:45.062: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5285,SelfLink:/apis/apps/v1/namespaces/deployment-5285/replicasets/nginx-deployment-7b8c6f4498,UID:eb90d0f7-74de-4410-8c88-e6967be8e5d9,ResourceVersion:19659549,Generation:3,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 54a8136d-786f-4b0b-83b3-f2fd80d1ecbe 0xc0029add97 0xc0029add98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 7 14:19:46.708: INFO: Pod "nginx-deployment-55fb7cb77f-4pqg9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4pqg9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-4pqg9,UID:fb0dfe96-2df0-459b-a573-b3d31cfa1f33,ResourceVersion:19659466,Generation:0,CreationTimestamp:2020-01-07 14:19:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928717 0xc002928718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029287b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.709: INFO: Pod "nginx-deployment-55fb7cb77f-5sqdv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5sqdv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-5sqdv,UID:ca77d3e8-0616-4418-aa6d-915161ac2f2e,ResourceVersion:19659551,Generation:0,CreationTimestamp:2020-01-07 14:19:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928887 0xc002928888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002928920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.710: INFO: Pod "nginx-deployment-55fb7cb77f-7swnp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7swnp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-7swnp,UID:08b5d7e2-8b91-4ed1-bc4a-1094fd7b030a,ResourceVersion:19659546,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc0029289a7 0xc0029289a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928a10} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002928a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.710: INFO: Pod "nginx-deployment-55fb7cb77f-9rkkb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9rkkb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-9rkkb,UID:a023d77d-3533-40e8-9219-3c53377d0e1a,ResourceVersion:19659513,Generation:0,CreationTimestamp:2020-01-07 14:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928ab7 0xc002928ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002928b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.710: INFO: Pod "nginx-deployment-55fb7cb77f-drgb7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-drgb7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-drgb7,UID:d0b2539b-40c4-4991-88c5-7b6d8522ea8e,ResourceVersion:19659533,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928bc7 0xc002928bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002928c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.710: INFO: Pod "nginx-deployment-55fb7cb77f-jqbh6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jqbh6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-jqbh6,UID:6765d64c-9c2e-4d2e-bbfc-ace26e989ac9,ResourceVersion:19659542,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928ce7 0xc002928ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002928d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.711: INFO: Pod "nginx-deployment-55fb7cb77f-lbnfk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lbnfk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-lbnfk,UID:c25c65af-382d-45e5-b829-f7a897a70d73,ResourceVersion:19659545,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928e07 0xc002928e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928e70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002928e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.711: INFO: Pod "nginx-deployment-55fb7cb77f-mkhr9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mkhr9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-mkhr9,UID:1fa0e87b-c502-4f1c-b7e0-282b0f4b800f,ResourceVersion:19659467,Generation:0,CreationTimestamp:2020-01-07 14:19:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002928f17 0xc002928f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002928fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-07 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.711: INFO: Pod "nginx-deployment-55fb7cb77f-rsfb2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rsfb2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-rsfb2,UID:1554e6ce-df09-443e-90b7-7eb8954f1a14,ResourceVersion:19659490,Generation:0,CreationTimestamp:2020-01-07 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002929077 0xc002929078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029290f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 14:19:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.711: INFO: Pod "nginx-deployment-55fb7cb77f-s6zgw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s6zgw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-s6zgw,UID:d042b207-451d-4d07-8950-ef4c91d4fa09,ResourceVersion:19659481,Generation:0,CreationTimestamp:2020-01-07 14:19:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc0029291e7 0xc0029291e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929260} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.711: INFO: Pod "nginx-deployment-55fb7cb77f-tq2r4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tq2r4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-tq2r4,UID:605b55a2-24f4-44be-a209-147a371718fd,ResourceVersion:19659520,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002929357 0xc002929358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029293d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029293f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.712: INFO: Pod "nginx-deployment-55fb7cb77f-x75pd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x75pd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-x75pd,UID:5ff59273-5edb-441e-b86f-2dd6d1069bd9,ResourceVersion:19659491,Generation:0,CreationTimestamp:2020-01-07 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc002929477 0xc002929478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029294e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-07 14:19:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.712: INFO: Pod "nginx-deployment-55fb7cb77f-znn8h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-znn8h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-55fb7cb77f-znn8h,UID:bcb5aee7-1bd0-43bf-8b96-dbef895cc4be,ResourceVersion:19659543,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 975612d5-8d56-4ed7-96b4-f0eb8259666a 0xc0029295d7 0xc0029295d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929640} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.713: INFO: Pod "nginx-deployment-7b8c6f4498-27r6g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-27r6g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-27r6g,UID:e80cb702-a8d1-424a-9bc0-4d087462469b,ResourceVersion:19659538,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc0029296e7 0xc0029296e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929760} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002929780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.713: INFO: Pod "nginx-deployment-7b8c6f4498-68m6c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-68m6c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-68m6c,UID:79a67343-c1de-4fc5-9f38-b0e754c47fa2,ResourceVersion:19659414,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929807 0xc002929808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029298a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://6c928d8524a5c54ad4004f4d0e7ac96f422888d2559c09843c6263772009c250}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.713: INFO: Pod "nginx-deployment-7b8c6f4498-6b244" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6b244,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-6b244,UID:f255f9e2-2027-4e2f-a69f-126120b251ed,ResourceVersion:19659541,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929987 0xc002929988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029299f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.713: INFO: Pod "nginx-deployment-7b8c6f4498-74kr2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-74kr2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-74kr2,UID:12d1c0c0-e85b-40a1-8157-35da5d77fda7,ResourceVersion:19659521,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929a97 
0xc002929a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.713: INFO: Pod "nginx-deployment-7b8c6f4498-8ks25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8ks25,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-8ks25,UID:d55265ad-f032-43bc-a620-436b6ebc0e7a,ResourceVersion:19659534,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929ba7 0xc002929ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod "nginx-deployment-7b8c6f4498-9f4tl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9f4tl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-9f4tl,UID:b3351c3c-8c1b-483b-bb77-34154efced88,ResourceVersion:19659431,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929cc7 0xc002929cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929d30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002929d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://758a82cfc3b864abf28133be3923a3f43c4c6706c983fcb479def9a25eee3a61}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod "nginx-deployment-7b8c6f4498-d5h5g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d5h5g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-d5h5g,UID:597e80ce-60e6-48b5-b165-0f23c191b503,ResourceVersion:19659433,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929e27 0xc002929e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 
00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cb2faf8d1c605bd317d6f00b5ca9cba5dacadf9818f3c2eb368de74128cdd886}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod "nginx-deployment-7b8c6f4498-dgr5w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dgr5w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-dgr5w,UID:34890d74-d773-44d6-9ef9-360c8e963c8d,ResourceVersion:19659405,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002929f87 0xc002929f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-07 14:19:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e12ba931977d8284ef2fc20175d0e3dca18f9e12ab11e88742f13e2de0b048c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod "nginx-deployment-7b8c6f4498-fc457" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fc457,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-fc457,UID:b10908a7-feaa-444f-8b14-d793636b27ba,ResourceVersion:19659565,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b280f7 0xc002b280f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 14:19:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod 
"nginx-deployment-7b8c6f4498-fdwwl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fdwwl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-fdwwl,UID:e52d6dbb-e0db-4203-8ee7-e7d3356d04f1,ResourceVersion:19659523,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28257 0xc002b28258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b282c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b282e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.714: INFO: Pod "nginx-deployment-7b8c6f4498-ffwwv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ffwwv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-ffwwv,UID:86040fd7-8d70-4762-8c14-a52c3ab420ed,ResourceVersion:19659396,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28367 0xc002b28368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b283e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d788b26ec475dea0618697cf737d2d829e9615424f80eb80e6db8f4abd6b0735}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.715: INFO: Pod "nginx-deployment-7b8c6f4498-gvsth" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gvsth,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-gvsth,UID:00d1b568-155e-4945-b85c-1e3f0261c529,ResourceVersion:19659547,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b284d7 0xc002b284d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-07 14:19:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.715: INFO: Pod "nginx-deployment-7b8c6f4498-l24ss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l24ss,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-l24ss,UID:53115b96-fe8a-434d-ae7e-8e0daa0470b8,ResourceVersion:19659544,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28637 0xc002b28638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b286a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b286c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.715: INFO: Pod "nginx-deployment-7b8c6f4498-ld58d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ld58d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-ld58d,UID:f4b97a83-9937-4092-bd99-d567583a728f,ResourceVersion:19659552,Generation:0,CreationTimestamp:2020-01-07 14:19:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28747 0xc002b28748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b287b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b287d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-07 14:19:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.715: INFO: Pod "nginx-deployment-7b8c6f4498-lk9b7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lk9b7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-lk9b7,UID:5330a655-195e-404f-80c4-b9bed22d7b73,ResourceVersion:19659411,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28897 0xc002b28898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28910} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:31 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c9d7cc9f951bf6fed3a1fdf5342f7c8648507087c2634f28bb0d9f531854e224}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.715: INFO: Pod "nginx-deployment-7b8c6f4498-lvvck" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lvvck,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-lvvck,UID:3bba7dec-b9a7-4ed6-a38b-a877f0e2217d,ResourceVersion:19659539,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28a17 0xc002b28a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.716: INFO: Pod "nginx-deployment-7b8c6f4498-lxkwr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lxkwr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-lxkwr,UID:50431c6d-1e7a-4a3b-9d51-e0cea466d033,ResourceVersion:19659424,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28b37 0xc002b28b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f519906317bd19d63b85cff035b125474b216faad7847c3812fe5b2169e0652a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.716: INFO: Pod "nginx-deployment-7b8c6f4498-phrdr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-phrdr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-phrdr,UID:5acccb65-5562-47d5-bc28-0b8e6c7b021b,ResourceVersion:19659399,Generation:0,CreationTimestamp:2020-01-07 14:19:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28cb7 0xc002b28cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-07 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 14:19:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0727836f8e87a2469b0a2a6c58805852fc290028a8cc66db33debb31432408c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.716: INFO: Pod "nginx-deployment-7b8c6f4498-t77s9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t77s9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-t77s9,UID:634ac7b8-f11d-4fbf-a616-350f615e8716,ResourceVersion:19659537,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28e27 0xc002b28e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 7 14:19:46.716: INFO: Pod "nginx-deployment-7b8c6f4498-xrw5d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xrw5d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5285,SelfLink:/api/v1/namespaces/deployment-5285/pods/nginx-deployment-7b8c6f4498-xrw5d,UID:83e5aef5-ce07-48fe-b947-cb8b053a06ec,ResourceVersion:19659561,Generation:0,CreationTimestamp:2020-01-07 14:19:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 eb90d0f7-74de-4410-8c88-e6967be8e5d9 0xc002b28f47 0xc002b28f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dckr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dckr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dckr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b28fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b28fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 14:19:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-07 14:19:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:19:46.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5285" for this suite. 
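[Editor's example] The pod dump above is the test's snapshot of "nginx-deployment" mid-scale: pods created at 14:19:07 are Running and "available", while those created at 14:19:38 are still Pending or ContainerCreating and "not available". This is the expected shape of proportional scaling, where the Deployment controller splits added replicas across the existing ReplicaSets in proportion to their current sizes. A minimal sketch of an equivalent fixture, built from the apps/v1 Go types — the name, labels, and image come from the log; the replica count and the explicit RollingUpdate strategy are assumptions, not the test's exact values:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxDeployment builds a Deployment like the one this test scales:
// pods labeled name=nginx running nginx:1.14-alpine. Proportional
// scaling only applies to the RollingUpdate strategy, so it is set
// explicitly here (assumption: the fixture's strategy is not shown
// in this log excerpt).
func nginxDeployment(replicas int32) *appsv1.Deployment {
	labels := map[string]string{"name": "nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

Scaling such a Deployment while a rollout is in flight is what the conformance check exercises: the controller distributes the new replica count over the old and new ReplicaSets proportionally rather than sending everything to one of them, which is why the dump interleaves long-Running pods with freshly Pending ones under the same pod-template-hash owner.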
Jan 7 14:21:17.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:21:17.480: INFO: namespace deployment-5285 deletion completed in 1m29.434024124s • [SLOW TEST:130.589 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:21:17.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:21:17.657: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 7 14:21:17.670: INFO: Number of nodes with available pods: 0 Jan 7 14:21:17.670: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jan 7 14:21:17.823: INFO: Number of nodes with available pods: 0 Jan 7 14:21:17.823: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:18.829: INFO: Number of nodes with available pods: 0 Jan 7 14:21:18.829: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:19.831: INFO: Number of nodes with available pods: 0 Jan 7 14:21:19.831: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:20.836: INFO: Number of nodes with available pods: 0 Jan 7 14:21:20.836: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:21.831: INFO: Number of nodes with available pods: 0 Jan 7 14:21:21.831: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:22.848: INFO: Number of nodes with available pods: 0 Jan 7 14:21:22.848: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:23.833: INFO: Number of nodes with available pods: 0 Jan 7 14:21:23.833: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:24.836: INFO: Number of nodes with available pods: 0 Jan 7 14:21:24.836: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:25.832: INFO: Number of nodes with available pods: 0 Jan 7 14:21:25.832: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:26.833: INFO: Number of nodes with available pods: 0 Jan 7 14:21:26.833: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:27.833: INFO: Number of nodes with available pods: 0 Jan 7 14:21:27.833: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:28.838: INFO: Number of nodes with available pods: 1 Jan 7 14:21:28.838: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be 
unscheduled Jan 7 14:21:28.906: INFO: Number of nodes with available pods: 1 Jan 7 14:21:28.906: INFO: Number of running nodes: 0, number of available pods: 1 Jan 7 14:21:29.919: INFO: Number of nodes with available pods: 0 Jan 7 14:21:29.919: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 7 14:21:29.967: INFO: Number of nodes with available pods: 0 Jan 7 14:21:29.967: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:30.975: INFO: Number of nodes with available pods: 0 Jan 7 14:21:30.975: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:31.978: INFO: Number of nodes with available pods: 0 Jan 7 14:21:31.979: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:32.979: INFO: Number of nodes with available pods: 0 Jan 7 14:21:32.979: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:33.985: INFO: Number of nodes with available pods: 0 Jan 7 14:21:33.985: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:34.983: INFO: Number of nodes with available pods: 0 Jan 7 14:21:34.984: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:35.983: INFO: Number of nodes with available pods: 0 Jan 7 14:21:35.983: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:36.978: INFO: Number of nodes with available pods: 0 Jan 7 14:21:36.978: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:37.977: INFO: Number of nodes with available pods: 0 Jan 7 14:21:37.977: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:38.976: INFO: Number of nodes with available pods: 0 Jan 7 14:21:38.976: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:39.978: INFO: Number of nodes with available pods: 0 Jan 7 14:21:39.978: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:40.982: INFO: Number of nodes with available pods: 0 Jan 7 14:21:40.982: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:41.976: INFO: Number of nodes with available pods: 0 Jan 7 14:21:41.976: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:42.976: INFO: Number of nodes with available pods: 0 Jan 7 14:21:42.976: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:43.975: INFO: Number of nodes with available pods: 0 Jan 7 14:21:43.975: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:44.984: INFO: Number of nodes with available pods: 0 Jan 7 14:21:44.984: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:45.978: INFO: Number of nodes with available pods: 0 Jan 7 14:21:45.978: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:46.990: INFO: Number of nodes with available pods: 0 Jan 7 14:21:46.991: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:47.979: INFO: Number of nodes with available pods: 0 Jan 7 14:21:47.979: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:48.976: INFO: Number of nodes with available pods: 0 Jan 7 14:21:48.976: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:49.979: INFO: Number of nodes with available pods: 0 Jan 7 14:21:49.979: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:50.977: INFO: Number of nodes with available pods: 0 Jan 7 14:21:50.977: INFO: Node iruya-node is running more than one daemon pod Jan 7 
14:21:51.980: INFO: Number of nodes with available pods: 0 Jan 7 14:21:51.980: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:52.973: INFO: Number of nodes with available pods: 0 Jan 7 14:21:52.973: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:53.978: INFO: Number of nodes with available pods: 0 Jan 7 14:21:53.978: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:21:55.085: INFO: Number of nodes with available pods: 1 Jan 7 14:21:55.085: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7994, will wait for the garbage collector to delete the pods Jan 7 14:21:55.176: INFO: Deleting DaemonSet.extensions daemon-set took: 9.517572ms Jan 7 14:21:55.577: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.714796ms Jan 7 14:22:06.612: INFO: Number of nodes with available pods: 0 Jan 7 14:22:06.613: INFO: Number of running nodes: 0, number of available pods: 0 Jan 7 14:22:06.619: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7994/daemonsets","resourceVersion":"19660016"},"items":null} Jan 7 14:22:06.622: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7994/pods","resourceVersion":"19660016"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:22:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7994" for this suite. 
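For reference, the DaemonSet shape this "complex daemon" flow drives, a pod template carrying a nodeSelector so that relabeling a node launches or evicts the daemon pod, looks roughly like the sketch below. This is a minimal sketch, not the test's literal spec: the label key/value and container name are illustrative assumptions; only the nginx image appears elsewhere in this run.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Daemon pods are created only on nodes carrying this
					// label, so relabeling a node blue/green schedules or
					// unschedules the pod ("color" is a placeholder key).
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

With a spec like this, something along the lines of `kubectl label node iruya-node color=green --overwrite` is the relabeling that flips the "Number of running nodes" count polled above.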
Jan 7 14:22:12.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:22:12.867: INFO: namespace daemonsets-7994 deletion completed in 6.190410919s • [SLOW TEST:55.388 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:22:12.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-717924fb-47c8-472b-a997-cb40e73ff169 STEP: Creating a pod to test consume configMaps Jan 7 14:22:12.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067" in namespace "configmap-5025" to be "success or failure" Jan 7 14:22:13.017: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067": Phase="Pending", Reason="", readiness=false. Elapsed: 21.511116ms Jan 7 14:22:15.025: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030191855s Jan 7 14:22:17.032: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037035798s Jan 7 14:22:19.038: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043155202s Jan 7 14:22:21.059: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06394475s STEP: Saw pod success Jan 7 14:22:21.060: INFO: Pod "pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067" satisfied condition "success or failure" Jan 7 14:22:21.074: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067 container configmap-volume-test: STEP: delete the pod Jan 7 14:22:21.229: INFO: Waiting for pod pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067 to disappear Jan 7 14:22:21.298: INFO: Pod pod-configmaps-b36c1425-c8b7-4fd9-86ec-07388cd78067 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:22:21.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5025" for this suite. 
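The ConfigMap-as-volume consumption above reduces to a pod like the following sketch: mount the ConfigMap, read the projected key back, and let the pod run to completion so it satisfies "success or failure". ConfigMap name, key, mount path, and image are placeholders, not the suite's literal values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "docker.io/library/busybox:1.29",
				// Each key of the ConfigMap appears as a file under the
				// mount path; the test reads one back and checks content.
				Command: []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume",
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}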
Jan 7 14:22:27.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:22:27.459: INFO: namespace configmap-5025 deletion completed in 6.155039972s • [SLOW TEST:14.591 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:22:27.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 7 14:22:27.636: INFO: Waiting up to 5m0s for pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039" in namespace "emptydir-6359" to be "success or failure" Jan 7 14:22:27.666: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039": Phase="Pending", Reason="", readiness=false. Elapsed: 29.088829ms Jan 7 14:22:29.675: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03824785s Jan 7 14:22:31.684: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047791813s Jan 7 14:22:33.696: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059050956s Jan 7 14:22:35.702: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065504368s STEP: Saw pod success Jan 7 14:22:35.702: INFO: Pod "pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039" satisfied condition "success or failure" Jan 7 14:22:35.705: INFO: Trying to get logs from node iruya-node pod pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039 container test-container: STEP: delete the pod Jan 7 14:22:35.810: INFO: Waiting for pod pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039 to disappear Jan 7 14:22:35.822: INFO: Pod pod-cc1dcd2a-63f8-4262-921c-0c002b7d0039 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:22:35.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6359" for this suite. 
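The (non-root,0666,tmpfs) variant boils down to a memory-backed emptyDir plus a container, run as a non-root UID, that writes a file and verifies its mode. A rough equivalent, with busybox and UID 1001 standing in for the suite's mounttest image and user:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // placeholder non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c",
					"echo -n hello > /mnt/tmpfs/f && chmod 0666 /mnt/tmpfs/f && ls -l /mnt/tmpfs/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "tmpfs-volume", MountPath: "/mnt/tmpfs"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tmpfs-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs instead
					// of node disk, which is the "tmpfs" in the test name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}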
Jan 7 14:22:41.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:22:42.145: INFO: namespace emptydir-6359 deletion completed in 6.21317208s • [SLOW TEST:14.686 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:22:42.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:22:42.238: INFO: Creating ReplicaSet my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d Jan 7 14:22:42.321: INFO: Pod name my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d: Found 0 pods out of 1 Jan 7 14:22:47.331: INFO: Pod name my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d: Found 1 pods out of 1 Jan 7 14:22:47.331: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d" is running Jan 7 14:22:49.351: INFO: Pod "my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d-m7gvd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 14:22:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 14:22:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 14:22:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 14:22:42 +0000 UTC Reason: Message:}]) Jan 7 14:22:49.352: INFO: Trying to dial the pod Jan 7 14:22:54.429: INFO: Controller my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d: Got expected result from replica 1 [my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d-m7gvd]: "my-hostname-basic-1a420144-49a5-4630-b629-57a3a32a827d-m7gvd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:22:54.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9848" for this suite. 
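The ReplicaSet under test is a single replica of a webserver that answers with its own hostname, which is why the check above dials the pod and expects the pod name back. A hedged sketch; the image tag and port are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "my-hostname-basic",
					// serve-hostname replies to HTTP requests with the
					// pod's hostname; tag and port are assumptions.
					Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}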
Jan 7 14:23:00.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:23:00.627: INFO: namespace replicaset-9848 deletion completed in 6.191009141s • [SLOW TEST:18.481 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:23:00.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:23:00.689: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:23:11.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9393" for this suite. 
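Remote command execution over websockets goes through the same exec subresource URL that kubectl exec uses; the client just negotiates the channel.k8s.io (or base64.channel.k8s.io) subprotocol instead of SPDY, and each frame carries a one-byte stream prefix (0 stdin, 1 stdout, 2 stderr). A sketch of only the URL construction; the apiserver host, pod name, and command are placeholders, since the log elides them.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	// Repeated "command" parameters form the argv of the remote process.
	q.Add("command", "cat")
	q.Add("command", "/etc/resolv.conf")
	q.Set("stdout", "1")
	q.Set("stderr", "1")
	u := url.URL{
		Scheme:   "wss",
		Host:     "apiserver:6443", // placeholder apiserver address
		Path:     "/api/v1/namespaces/pods-9393/pods/pod-exec-websocket/exec",
		RawQuery: q.Encode(),
	}
	fmt.Println(u.String())
}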
Jan 7 14:24:03.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:24:03.519: INFO: namespace pods-9393 deletion completed in 52.236620183s • [SLOW TEST:62.891 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:24:03.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jan 7 14:24:03.586: INFO: Waiting up to 5m0s for pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b" in namespace "var-expansion-670" to be "success or failure" Jan 7 14:24:03.593: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147316ms Jan 7 14:24:05.600: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014155986s Jan 7 14:24:07.608: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02175593s Jan 7 14:24:09.616: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030136157s Jan 7 14:24:11.631: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044968456s Jan 7 14:24:13.656: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070583731s STEP: Saw pod success Jan 7 14:24:13.657: INFO: Pod "var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b" satisfied condition "success or failure" Jan 7 14:24:13.665: INFO: Trying to get logs from node iruya-node pod var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b container dapi-container: STEP: delete the pod Jan 7 14:24:13.721: INFO: Waiting for pod var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b to disappear Jan 7 14:24:13.757: INFO: Pod var-expansion-1305373f-b82b-42fb-ae63-0497bf49238b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:24:13.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-670" for this suite. 
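Argument substitution is a kubelet feature: "$(VAR)" references in a container's command/args are expanded from that container's own environment before the process starts, with no shell involved. A minimal sketch, names and values being placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c"},
				// The kubelet rewrites $(TEST_VAR) to "test-value" before
				// exec'ing the container; the test then checks the output.
				Args: []string{"echo value is $(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}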
Jan 7 14:24:19.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:24:19.966: INFO: namespace var-expansion-670 deletion completed in 6.152343411s • [SLOW TEST:16.446 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:24:19.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 7 14:24:20.070: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:24:20.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7095" for this suite. 
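The doubled "kubectl kubectl" in the echoed command above appears to be an artifact of how the framework prints the binary path followed by its argv, not the command actually run. With -p 0 the proxy binds an ephemeral port, so the only way to learn the port is to parse the first line it prints. A sketch, assuming the usual "Starting to serve on 127.0.0.1:PORT" message shape:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	defer cmd.Process.Kill() // sketch only: tear the proxy down on exit
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	m := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)
	if len(m) != 2 {
		log.Fatalf("could not find port in %q", line)
	}
	fmt.Println("proxy is serving on port", m[1])
}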
Jan 7 14:24:26.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:24:26.349: INFO: namespace kubectl-7095 deletion completed in 6.181080458s • [SLOW TEST:6.383 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:24:26.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 7 14:24:35.152: INFO: Successfully updated pod "labelsupdatedf5a7ab6-a3f9-41ec-b3e9-403ba39b3fa1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:24:39.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1361" for this suite. 
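The pod behind this check mounts its own labels through a projected downwardAPI source; when the test patches the labels, the kubelet rewrites the projected file, which is what "Successfully updated pod" is confirming. The Downward API volume test further below (downward-api-6867) exercises the same mechanism with a plain downwardAPI volume instead of a projected one. A sketch with placeholder image and paths:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29",
				// Re-read the projected file so a label patch shows up.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}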
Jan 7 14:25:01.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:25:01.482: INFO: namespace projected-1361 deletion completed in 22.223522988s • [SLOW TEST:35.133 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:25:01.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-518 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 7 14:25:01.544: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 7 14:25:31.812: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-518 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 14:25:31.812: INFO: >>> kubeConfig: /root/.kube/config I0107 14:25:31.936868 8 log.go:172] (0xc000dd6420) (0xc002a84e60) Create stream I0107 14:25:31.937192 8 log.go:172] (0xc000dd6420) (0xc002a84e60) Stream added, broadcasting: 1 I0107 14:25:31.949304 8 log.go:172] (0xc000dd6420) Reply frame received for 1 I0107 14:25:31.949454 8 log.go:172] (0xc000dd6420) (0xc00205b9a0) Create stream I0107 14:25:31.949479 8 log.go:172] (0xc000dd6420) (0xc00205b9a0) Stream added, broadcasting: 3 I0107 14:25:31.952788 8 log.go:172] (0xc000dd6420) Reply frame received for 3 I0107 14:25:31.952854 8 log.go:172] (0xc000dd6420) (0xc002a84f00) Create stream I0107 14:25:31.952880 8 log.go:172] (0xc000dd6420) (0xc002a84f00) Stream added, broadcasting: 5 I0107 14:25:31.955318 8 log.go:172] (0xc000dd6420) Reply frame received for 5 I0107 14:25:32.250180 8 log.go:172] (0xc000dd6420) Data frame received for 3 I0107 14:25:32.250816 8 log.go:172] (0xc00205b9a0) (3) Data frame handling I0107 14:25:32.251012 8 log.go:172] (0xc00205b9a0) (3) Data frame sent I0107 14:25:32.390649 8 log.go:172] (0xc000dd6420) (0xc00205b9a0) Stream removed, broadcasting: 3 I0107 14:25:32.391122 8 log.go:172] (0xc000dd6420) (0xc002a84f00) Stream removed, broadcasting: 5 I0107 14:25:32.391232 8 log.go:172] (0xc000dd6420) Data frame received for 1 I0107 14:25:32.391304 8 log.go:172] (0xc002a84e60) (1) Data frame handling I0107 14:25:32.391334 8 log.go:172] (0xc002a84e60) (1) Data frame sent I0107 14:25:32.391355 8 log.go:172] (0xc000dd6420) (0xc002a84e60) Stream removed, 
broadcasting: 1 I0107 14:25:32.391406 8 log.go:172] (0xc000dd6420) Go away received I0107 14:25:32.391767 8 log.go:172] (0xc000dd6420) (0xc002a84e60) Stream removed, broadcasting: 1 I0107 14:25:32.391795 8 log.go:172] (0xc000dd6420) (0xc00205b9a0) Stream removed, broadcasting: 3 I0107 14:25:32.391799 8 log.go:172] (0xc000dd6420) (0xc002a84f00) Stream removed, broadcasting: 5 Jan 7 14:25:32.392: INFO: Waiting for endpoints: map[] Jan 7 14:25:32.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-518 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 7 14:25:32.400: INFO: >>> kubeConfig: /root/.kube/config I0107 14:25:32.475449 8 log.go:172] (0xc001af2580) (0xc001c923c0) Create stream I0107 14:25:32.475755 8 log.go:172] (0xc001af2580) (0xc001c923c0) Stream added, broadcasting: 1 I0107 14:25:32.483530 8 log.go:172] (0xc001af2580) Reply frame received for 1 I0107 14:25:32.483562 8 log.go:172] (0xc001af2580) (0xc00205bae0) Create stream I0107 14:25:32.483570 8 log.go:172] (0xc001af2580) (0xc00205bae0) Stream added, broadcasting: 3 I0107 14:25:32.485378 8 log.go:172] (0xc001af2580) Reply frame received for 3 I0107 14:25:32.485412 8 log.go:172] (0xc001af2580) (0xc0023ba0a0) Create stream I0107 14:25:32.485423 8 log.go:172] (0xc001af2580) (0xc0023ba0a0) Stream added, broadcasting: 5 I0107 14:25:32.487264 8 log.go:172] (0xc001af2580) Reply frame received for 5 I0107 14:25:32.772534 8 log.go:172] (0xc001af2580) Data frame received for 3 I0107 14:25:32.772676 8 log.go:172] (0xc00205bae0) (3) Data frame handling I0107 14:25:32.772694 8 log.go:172] (0xc00205bae0) (3) Data frame sent I0107 14:25:32.962025 8 log.go:172] (0xc001af2580) Data frame received for 1 I0107 14:25:32.962233 8 log.go:172] (0xc001af2580) (0xc00205bae0) Stream removed, broadcasting: 3 I0107 14:25:32.962307 8 log.go:172] (0xc001c923c0) (1) Data frame handling I0107 14:25:32.962434 8 log.go:172] (0xc001c923c0) (1) Data frame sent I0107 14:25:32.962529 8 log.go:172] (0xc001af2580) (0xc0023ba0a0) Stream removed, broadcasting: 5 I0107 14:25:32.962625 8 log.go:172] (0xc001af2580) (0xc001c923c0) Stream removed, broadcasting: 1 I0107 14:25:32.962694 8 log.go:172] (0xc001af2580) Go away received I0107 14:25:32.963085 8 log.go:172] (0xc001af2580) (0xc001c923c0) Stream removed, broadcasting: 1 I0107 14:25:32.963156 8 log.go:172] (0xc001af2580) (0xc00205bae0) Stream removed, broadcasting: 3 I0107 14:25:32.963218 8 log.go:172] (0xc001af2580) (0xc0023ba0a0) Stream removed, broadcasting: 5 Jan 7 14:25:32.963: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:25:32.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-518" for this suite. 
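Behind the curls in the log, one pod's test server is asked (via its HTTP /dial endpoint) to probe another pod over UDP; the target answers the literal query "hostName" with its hostname. Stripped of the HTTP wrapper, the UDP half looks like this sketch; the 10.44.0.x endpoint is taken from the log, and the echo convention is the test image's, not a general protocol.

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "10.44.0.1:8081")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(time.Second))
	// The test server replies to the literal string "hostName" with the
	// pod's hostname, which is how pod-to-pod UDP reachability is proven.
	if _, err := conn.Write([]byte("hostName\n")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("reply: %q\n", buf[:n])
}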
Jan 7 14:25:55.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:25:55.129: INFO: namespace pod-network-test-518 deletion completed in 22.141947281s • [SLOW TEST:53.646 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:25:55.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:26:03.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5426" for this suite. 
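The read-only busybox check hangs on a single securityContext field: with ReadOnlyRootFilesystem set, writes to the container's root filesystem fail while mounted volumes stay writable. A minimal sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-readonly",
				Image: "docker.io/library/busybox:1.29",
				// This write is expected to fail; the test asserts that
				// nothing lands on the root filesystem.
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}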
Jan 7 14:26:49.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:26:49.487: INFO: namespace kubelet-test-5426 deletion completed in 46.180242498s • [SLOW TEST:54.357 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:26:49.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:26:49.604: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 7 14:26:49.658: INFO: Number of nodes with available pods: 0 Jan 7 14:26:49.658: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:51.061: INFO: Number of nodes with available pods: 0 Jan 7 14:26:51.061: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:51.684: INFO: Number of nodes with available pods: 0 Jan 7 14:26:51.685: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:52.674: INFO: Number of nodes with available pods: 0 Jan 7 14:26:52.674: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:53.679: INFO: Number of nodes with available pods: 0 Jan 7 14:26:53.679: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:54.993: INFO: Number of nodes with available pods: 0 Jan 7 14:26:54.994: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:55.673: INFO: Number of nodes with available pods: 0 Jan 7 14:26:55.673: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:56.717: INFO: Number of nodes with available pods: 0 Jan 7 14:26:56.717: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:57.672: INFO: Number of nodes with available pods: 0 Jan 7 14:26:57.672: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:58.669: INFO: Number of nodes with available pods: 1 Jan 7 14:26:58.670: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:26:59.672: INFO: Number of nodes with available pods: 2 Jan 7 14:26:59.672: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 7 14:26:59.839: INFO: Wrong image for pod: daemon-set-bql8s. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:26:59.839: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:00.867: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:00.867: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:01.872: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:01.872: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:02.884: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:02.884: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:03.876: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:03.876: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:03.876: INFO: Pod daemon-set-h4ffb is not available Jan 7 14:27:04.873: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:04.873: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:04.873: INFO: Pod daemon-set-h4ffb is not available Jan 7 14:27:05.866: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:05.866: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:05.866: INFO: Pod daemon-set-h4ffb is not available Jan 7 14:27:06.869: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:06.870: INFO: Wrong image for pod: daemon-set-h4ffb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:06.870: INFO: Pod daemon-set-h4ffb is not available Jan 7 14:27:07.907: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:07.907: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:08.867: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:08.868: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:09.880: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:09.880: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 7 14:27:10.861: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:10.861: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:12.285: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:12.285: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:12.865: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:12.865: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:13.881: INFO: Pod daemon-set-855bb is not available Jan 7 14:27:13.881: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:14.881: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:15.867: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:16.871: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:17.873: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:18.876: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:19.876: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:19.876: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:20.868: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:20.868: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:21.864: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:21.865: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:22.867: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:22.868: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:23.879: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:23.879: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:24.865: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:24.866: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:25.877: INFO: Wrong image for pod: daemon-set-bql8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 7 14:27:25.877: INFO: Pod daemon-set-bql8s is not available Jan 7 14:27:26.869: INFO: Pod daemon-set-l49tk is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 7 14:27:26.893: INFO: Number of nodes with available pods: 1 Jan 7 14:27:26.893: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:27.944: INFO: Number of nodes with available pods: 1 Jan 7 14:27:27.944: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:28.905: INFO: Number of nodes with available pods: 1 Jan 7 14:27:28.905: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:29.920: INFO: Number of nodes with available pods: 1 Jan 7 14:27:29.920: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:30.911: INFO: Number of nodes with available pods: 1 Jan 7 14:27:30.911: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:31.915: INFO: Number of nodes with available pods: 1 Jan 7 14:27:31.915: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:32.919: INFO: Number of nodes with available pods: 1 Jan 7 14:27:32.919: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:33.930: INFO: Number of nodes with available pods: 1 Jan 7 14:27:33.930: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:34.910: INFO: Number of nodes with available pods: 1 Jan 7 14:27:34.910: INFO: Node iruya-node is running more than one daemon pod Jan 7 14:27:35.912: INFO: Number of nodes with available pods: 2 Jan 7 14:27:35.912: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7641, will wait for the garbage collector to delete the pods Jan 7 14:27:36.013: INFO: Deleting DaemonSet.extensions daemon-set took: 8.951085ms Jan 7 14:27:36.314: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.007696ms Jan 7 14:27:48.021: INFO: Number of nodes with available pods: 0 Jan 7 14:27:48.022: INFO: Number of running nodes: 0, number of available pods: 0 Jan 7 14:27:48.024: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7641/daemonsets","resourceVersion":"19660821"},"items":null} Jan 7 14:27:48.026: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7641/pods","resourceVersion":"19660821"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:27:48.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7641" for this suite. 
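The long "Wrong image for pod" sequence above is the RollingUpdate strategy replacing daemon pods one node at a time after the pod template's image changed. The relevant bits of the spec, sketched with placeholder labels:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces daemon pods node by node once the
			// template changes; the polling above watches it converge.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine", // pre-update image from the log
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Updating Spec.Template.Spec.Containers[0].Image to gcr.io/kubernetes-e2e-test-images/redis:1.0 (the target image named in the log) is the change that kicks off the rollout.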
Jan 7 14:27:54.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:27:54.231: INFO: namespace daemonsets-7641 deletion completed in 6.194221686s • [SLOW TEST:64.744 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:27:54.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 7 14:27:54.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3171' Jan 7 14:27:56.885: INFO: stderr: "" Jan 7 14:27:56.885: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 7 14:27:56.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3171' Jan 7 14:27:57.044: INFO: stderr: "" Jan 7 14:27:57.044: INFO: stdout: "update-demo-nautilus-jf66c update-demo-nautilus-sxvhm " Jan 7 14:27:57.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf66c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:27:57.144: INFO: stderr: "" Jan 7 14:27:57.144: INFO: stdout: "" Jan 7 14:27:57.144: INFO: update-demo-nautilus-jf66c is created but not running Jan 7 14:28:02.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3171' Jan 7 14:28:02.407: INFO: stderr: "" Jan 7 14:28:02.407: INFO: stdout: "update-demo-nautilus-jf66c update-demo-nautilus-sxvhm " Jan 7 14:28:02.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf66c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:28:02.570: INFO: stderr: "" Jan 7 14:28:02.570: INFO: stdout: "" Jan 7 14:28:02.570: INFO: update-demo-nautilus-jf66c is created but not running Jan 7 14:28:07.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3171' Jan 7 14:28:07.710: INFO: stderr: "" Jan 7 14:28:07.711: INFO: stdout: "update-demo-nautilus-jf66c update-demo-nautilus-sxvhm " Jan 7 14:28:07.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf66c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:28:07.825: INFO: stderr: "" Jan 7 14:28:07.825: INFO: stdout: "true" Jan 7 14:28:07.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf66c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:28:07.956: INFO: stderr: "" Jan 7 14:28:07.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 7 14:28:07.956: INFO: validating pod update-demo-nautilus-jf66c Jan 7 14:28:07.968: INFO: got data: { "image": "nautilus.jpg" } Jan 7 14:28:07.969: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 7 14:28:07.969: INFO: update-demo-nautilus-jf66c is verified up and running Jan 7 14:28:07.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxvhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:28:08.060: INFO: stderr: "" Jan 7 14:28:08.060: INFO: stdout: "true" Jan 7 14:28:08.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxvhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3171' Jan 7 14:28:08.200: INFO: stderr: "" Jan 7 14:28:08.200: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 7 14:28:08.200: INFO: validating pod update-demo-nautilus-sxvhm Jan 7 14:28:08.222: INFO: got data: { "image": "nautilus.jpg" } Jan 7 14:28:08.222: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 7 14:28:08.222: INFO: update-demo-nautilus-sxvhm is verified up and running STEP: using delete to clean up resources Jan 7 14:28:08.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3171' Jan 7 14:28:08.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 7 14:28:08.316: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 7 14:28:08.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3171' Jan 7 14:28:08.493: INFO: stderr: "No resources found.\n" Jan 7 14:28:08.493: INFO: stdout: "" Jan 7 14:28:08.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3171 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 7 14:28:08.616: INFO: stderr: "" Jan 7 14:28:08.616: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:28:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3171" for this suite. Jan 7 14:28:30.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:28:30.751: INFO: namespace kubectl-3171 deletion completed in 22.130211467s • [SLOW TEST:36.519 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:28:30.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 7 14:28:39.030: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:28:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9101" for this suite. 
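The termination-message check comes down to two container fields. With FallbackToLogsOnError, the kubelet reads the termination message file first and falls back to the tail of the container log only when the container fails and the file is empty; here the container succeeds, so the file's contents ("OK") win. A sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "docker.io/library/busybox:1.29", // placeholder image
				// Write the message, then exit 0 so the file is used.
				Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}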
Jan 7 14:28:45.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:28:45.284: INFO: namespace container-runtime-9101 deletion completed in 6.179303646s • [SLOW TEST:14.532 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:28:45.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 7 14:28:56.088: INFO: Successfully updated pod "labelsupdate9bc76690-865b-4a25-b721-c6a96f3d5241" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 7 14:28:58.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6867" for this suite. 
Jan 7 14:29:20.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 14:29:20.330: INFO: namespace downward-api-6867 deletion completed in 22.167286369s • [SLOW TEST:35.043 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 7 14:29:20.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 7 14:29:20.511: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log alternatives.l... (200; 31.383281ms)
Jan  7 14:29:20.521: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.911427ms)
Jan  7 14:29:20.531: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.076474ms)
Jan  7 14:29:20.572: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 41.375054ms)
Jan  7 14:29:20.597: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 24.873906ms)
Jan  7 14:29:20.612: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.907145ms)
Jan  7 14:29:20.644: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 31.65112ms)
Jan  7 14:29:20.659: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.723746ms)
Jan  7 14:29:20.675: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.893206ms)
Jan  7 14:29:20.697: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 21.177893ms)
Jan  7 14:29:20.708: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.534729ms)
Jan  7 14:29:20.721: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.934726ms)
Jan  7 14:29:20.732: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.176827ms)
Jan  7 14:29:20.748: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 16.15948ms)
Jan  7 14:29:20.755: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.548146ms)
Jan  7 14:29:20.761: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.608098ms)
Jan  7 14:29:20.776: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.294461ms)
Jan  7 14:29:20.785: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.24879ms)
Jan  7 14:29:20.790: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.848827ms)
Jan  7 14:29:20.798: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.846885ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:29:20.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2499" for this suite.
Jan  7 14:29:26.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:29:26.983: INFO: namespace proxy-2499 deletion completed in 6.178311427s

• [SLOW TEST:6.651 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
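
The proxy test above issues twenty GETs against the node's logs subresource through the apiserver, timing each response. A hand-run equivalent, reusing the node name iruya-node from this run (any kubeconfig with node proxy rights will do):

  # Direct, authenticated raw GET through the apiserver:
  $ kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/

  # Or via a local unauthenticated proxy:
  $ kubectl proxy --port=8001 &
  $ curl http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/
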
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:29:26.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3313
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  7 14:29:27.082: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  7 14:30:05.397: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3313 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:30:05.398: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:30:05.492646       8 log.go:172] (0xc002ff0f20) (0xc002ad9c20) Create stream
I0107 14:30:05.492955       8 log.go:172] (0xc002ff0f20) (0xc002ad9c20) Stream added, broadcasting: 1
I0107 14:30:05.504709       8 log.go:172] (0xc002ff0f20) Reply frame received for 1
I0107 14:30:05.504813       8 log.go:172] (0xc002ff0f20) (0xc002ad9cc0) Create stream
I0107 14:30:05.504825       8 log.go:172] (0xc002ff0f20) (0xc002ad9cc0) Stream added, broadcasting: 3
I0107 14:30:05.512371       8 log.go:172] (0xc002ff0f20) Reply frame received for 3
I0107 14:30:05.512457       8 log.go:172] (0xc002ff0f20) (0xc0023bb040) Create stream
I0107 14:30:05.512487       8 log.go:172] (0xc002ff0f20) (0xc0023bb040) Stream added, broadcasting: 5
I0107 14:30:05.515810       8 log.go:172] (0xc002ff0f20) Reply frame received for 5
I0107 14:30:06.668362       8 log.go:172] (0xc002ff0f20) Data frame received for 3
I0107 14:30:06.668574       8 log.go:172] (0xc002ad9cc0) (3) Data frame handling
I0107 14:30:06.668611       8 log.go:172] (0xc002ad9cc0) (3) Data frame sent
I0107 14:30:06.818896       8 log.go:172] (0xc002ff0f20) Data frame received for 1
I0107 14:30:06.819196       8 log.go:172] (0xc002ff0f20) (0xc002ad9cc0) Stream removed, broadcasting: 3
I0107 14:30:06.819310       8 log.go:172] (0xc002ad9c20) (1) Data frame handling
I0107 14:30:06.819359       8 log.go:172] (0xc002ad9c20) (1) Data frame sent
I0107 14:30:06.819411       8 log.go:172] (0xc002ff0f20) (0xc0023bb040) Stream removed, broadcasting: 5
I0107 14:30:06.819504       8 log.go:172] (0xc002ff0f20) (0xc002ad9c20) Stream removed, broadcasting: 1
I0107 14:30:06.819559       8 log.go:172] (0xc002ff0f20) Go away received
I0107 14:30:06.820187       8 log.go:172] (0xc002ff0f20) (0xc002ad9c20) Stream removed, broadcasting: 1
I0107 14:30:06.820218       8 log.go:172] (0xc002ff0f20) (0xc002ad9cc0) Stream removed, broadcasting: 3
I0107 14:30:06.820252       8 log.go:172] (0xc002ff0f20) (0xc0023bb040) Stream removed, broadcasting: 5
Jan  7 14:30:06.820: INFO: Found all expected endpoints: [netserver-0]
Jan  7 14:30:06.835: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3313 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:30:06.835: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:30:06.911887       8 log.go:172] (0xc002646a50) (0xc002425f40) Create stream
I0107 14:30:06.912178       8 log.go:172] (0xc002646a50) (0xc002425f40) Stream added, broadcasting: 1
I0107 14:30:06.923666       8 log.go:172] (0xc002646a50) Reply frame received for 1
I0107 14:30:06.923719       8 log.go:172] (0xc002646a50) (0xc0027e61e0) Create stream
I0107 14:30:06.923730       8 log.go:172] (0xc002646a50) (0xc0027e61e0) Stream added, broadcasting: 3
I0107 14:30:06.925497       8 log.go:172] (0xc002646a50) Reply frame received for 3
I0107 14:30:06.925528       8 log.go:172] (0xc002646a50) (0xc0023bb0e0) Create stream
I0107 14:30:06.925543       8 log.go:172] (0xc002646a50) (0xc0023bb0e0) Stream added, broadcasting: 5
I0107 14:30:06.927076       8 log.go:172] (0xc002646a50) Reply frame received for 5
I0107 14:30:08.089174       8 log.go:172] (0xc002646a50) Data frame received for 3
I0107 14:30:08.089330       8 log.go:172] (0xc0027e61e0) (3) Data frame handling
I0107 14:30:08.089366       8 log.go:172] (0xc0027e61e0) (3) Data frame sent
I0107 14:30:08.241044       8 log.go:172] (0xc002646a50) Data frame received for 1
I0107 14:30:08.241127       8 log.go:172] (0xc002425f40) (1) Data frame handling
I0107 14:30:08.241159       8 log.go:172] (0xc002425f40) (1) Data frame sent
I0107 14:30:08.241192       8 log.go:172] (0xc002646a50) (0xc002425f40) Stream removed, broadcasting: 1
I0107 14:30:08.241256       8 log.go:172] (0xc002646a50) (0xc0027e61e0) Stream removed, broadcasting: 3
I0107 14:30:08.241630       8 log.go:172] (0xc002646a50) (0xc0023bb0e0) Stream removed, broadcasting: 5
I0107 14:30:08.241703       8 log.go:172] (0xc002646a50) (0xc002425f40) Stream removed, broadcasting: 1
I0107 14:30:08.241721       8 log.go:172] (0xc002646a50) (0xc0027e61e0) Stream removed, broadcasting: 3
I0107 14:30:08.241730       8 log.go:172] (0xc002646a50) (0xc0023bb0e0) Stream removed, broadcasting: 5
I0107 14:30:08.241975       8 log.go:172] (0xc002646a50) Go away received
Jan  7 14:30:08.242: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:30:08.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3313" for this suite.
Jan  7 14:30:32.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:30:32.422: INFO: namespace pod-network-test-3313 deletion completed in 24.154457995s

• [SLOW TEST:65.438 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
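
The ExecWithOptions lines above are the whole mechanism of this check: the framework execs into the hostNetwork helper pod and probes each netserver pod over UDP with nc, expecting the pod to echo a hostname back. The same probe can be run by hand, reusing the namespace, pod names, and pod IP from this run:

  $ kubectl exec -n pod-network-test-3313 host-test-container-pod -c hostexec -- \
      /bin/sh -c 'echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v "^\s*$"'
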
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:30:32.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5257/configmap-test-e2aaed15-d144-482d-bedb-2e214bceaa7d
STEP: Creating a pod to test consume configMaps
Jan  7 14:30:32.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50" in namespace "configmap-5257" to be "success or failure"
Jan  7 14:30:32.517: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50": Phase="Pending", Reason="", readiness=false. Elapsed: 11.270217ms
Jan  7 14:30:34.536: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029844936s
Jan  7 14:30:36.557: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05126988s
Jan  7 14:30:38.571: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064822137s
Jan  7 14:30:40.604: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097994529s
STEP: Saw pod success
Jan  7 14:30:40.605: INFO: Pod "pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50" satisfied condition "success or failure"
Jan  7 14:30:40.624: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50 container env-test: 
STEP: delete the pod
Jan  7 14:30:40.729: INFO: Waiting for pod pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50 to disappear
Jan  7 14:30:40.759: INFO: Pod pod-configmaps-1e1b0954-12d6-4dd2-9798-7b631056af50 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:30:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5257" for this suite.
Jan  7 14:30:46.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:30:47.005: INFO: namespace configmap-5257 deletion completed in 6.227795148s

• [SLOW TEST:14.583 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
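
The test builds a ConfigMap, then a pod whose container pulls one key into its environment through valueFrom. A minimal sketch of that wiring; the names and the busybox image are illustrative, not what the framework generated:

  $ kubectl create configmap test-config --from-literal=data-1=value-1
  $ kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-test
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: test-config
            key: data-1
  EOF
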
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:30:47.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  7 14:30:47.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:30:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8487" for this suite.
Jan  7 14:31:49.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:31:49.338: INFO: namespace pods-8487 deletion completed in 52.164318172s

• [SLOW TEST:62.333 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
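
This test reads the pod's log subresource over a websocket rather than plain HTTP; the framework dials the same REST path with an upgraded connection. Outside the framework, the identical bytes are reachable without the upgrade. The pod name below is a placeholder, since the generated name is not shown in the log:

  $ kubectl get --raw "/api/v1/namespaces/pods-8487/pods/<pod-name>/log"
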
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:31:49.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  7 14:31:49.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19" in namespace "downward-api-6603" to be "success or failure"
Jan  7 14:31:49.544: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Pending", Reason="", readiness=false. Elapsed: 35.513051ms
Jan  7 14:31:51.557: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048665576s
Jan  7 14:31:53.580: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072337156s
Jan  7 14:31:55.593: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085123927s
Jan  7 14:31:57.616: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107936572s
Jan  7 14:31:59.624: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116051141s
STEP: Saw pod success
Jan  7 14:31:59.624: INFO: Pod "downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19" satisfied condition "success or failure"
Jan  7 14:31:59.629: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19 container client-container: 
STEP: delete the pod
Jan  7 14:31:59.806: INFO: Waiting for pod downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19 to disappear
Jan  7 14:31:59.820: INFO: Pod downwardapi-volume-08a88991-9a30-4830-a608-d3084d1a5c19 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:31:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6603" for this suite.
Jan  7 14:32:05.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:32:06.058: INFO: namespace downward-api-6603 deletion completed in 6.205287869s

• [SLOW TEST:16.719 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
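
Here the downwardAPI volume's defaultMode must show up on every projected file. A sketch of a pod exercising the same behavior, with illustrative names and a busybox prober in place of the framework's test image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400   # every file in this volume gets mode 0400
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
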
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:32:06.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  7 14:32:06.174: INFO: Waiting up to 5m0s for pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0" in namespace "emptydir-5284" to be "success or failure"
Jan  7 14:32:06.179: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831624ms
Jan  7 14:32:08.188: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013123987s
Jan  7 14:32:10.194: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019686016s
Jan  7 14:32:12.201: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026000721s
Jan  7 14:32:14.214: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039713683s
STEP: Saw pod success
Jan  7 14:32:14.215: INFO: Pod "pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0" satisfied condition "success or failure"
Jan  7 14:32:14.280: INFO: Trying to get logs from node iruya-node pod pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0 container test-container: 
STEP: delete the pod
Jan  7 14:32:14.335: INFO: Waiting for pod pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0 to disappear
Jan  7 14:32:14.343: INFO: Pod pod-dc1ee75e-a92d-4474-8d93-23299e6ef6b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:32:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5284" for this suite.
Jan  7 14:32:20.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:32:20.530: INFO: namespace emptydir-5284 deletion completed in 6.17973651s

• [SLOW TEST:14.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
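
The (root,0777,tmpfs) case boils down to: back an emptyDir with medium Memory so it lands on tmpfs, write a file as root with mode 0777, and stat it. A hand-rolled equivalent using busybox instead of the framework's mounttest image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep ' /cache ' && touch /cache/f && chmod 0777 /cache/f && ls -l /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir:
        medium: Memory   # tmpfs-backed
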
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:32:20.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  7 14:32:20.617: INFO: namespace kubectl-7840
Jan  7 14:32:20.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7840'
Jan  7 14:32:20.983: INFO: stderr: ""
Jan  7 14:32:20.983: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  7 14:32:22.947: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:22.947: INFO: Found 0 / 1
Jan  7 14:32:22.999: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:22.999: INFO: Found 0 / 1
Jan  7 14:32:23.997: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:23.998: INFO: Found 0 / 1
Jan  7 14:32:25.009: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:25.010: INFO: Found 0 / 1
Jan  7 14:32:25.993: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:25.993: INFO: Found 0 / 1
Jan  7 14:32:26.991: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:26.991: INFO: Found 0 / 1
Jan  7 14:32:28.076: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:28.076: INFO: Found 0 / 1
Jan  7 14:32:28.992: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:28.993: INFO: Found 1 / 1
Jan  7 14:32:28.993: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  7 14:32:28.998: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:32:28.998: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  7 14:32:28.998: INFO: wait on redis-master startup in kubectl-7840 
Jan  7 14:32:28.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjxch redis-master --namespace=kubectl-7840'
Jan  7 14:32:29.220: INFO: stderr: ""
Jan  7 14:32:29.220: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 14:32:27.978 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 14:32:27.978 # Server started, Redis version 3.2.12\n1:M 07 Jan 14:32:27.978 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 14:32:27.978 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  7 14:32:29.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7840'
Jan  7 14:32:29.399: INFO: stderr: ""
Jan  7 14:32:29.399: INFO: stdout: "service/rm2 exposed\n"
Jan  7 14:32:29.435: INFO: Service rm2 in namespace kubectl-7840 found.
STEP: exposing service
Jan  7 14:32:31.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7840'
Jan  7 14:32:31.771: INFO: stderr: ""
Jan  7 14:32:31.771: INFO: stdout: "service/rm3 exposed\n"
Jan  7 14:32:31.856: INFO: Service rm3 in namespace kubectl-7840 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:32:33.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7840" for this suite.
Jan  7 14:33:13.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:33:14.032: INFO: namespace kubectl-7840 deletion completed in 40.155128326s

• [SLOW TEST:53.501 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
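
The expose sequence above is plain kubectl and reproduces outside the suite: first the RC is exposed as a service, then that service is re-exposed under a new name and port. The same commands, shortened with -n:

  $ kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n kubectl-7840
  $ kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n kubectl-7840
  $ kubectl get svc rm2 rm3 -n kubectl-7840   # confirm both services exist
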
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:33:14.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-481f3eec-94d1-45dd-928f-1263da99642f
STEP: Creating a pod to test consume configMaps
Jan  7 14:33:14.158: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26" in namespace "projected-4821" to be "success or failure"
Jan  7 14:33:14.168: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112408ms
Jan  7 14:33:16.185: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02625746s
Jan  7 14:33:18.194: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035756193s
Jan  7 14:33:20.202: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043726032s
Jan  7 14:33:22.217: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058907408s
STEP: Saw pod success
Jan  7 14:33:22.218: INFO: Pod "pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26" satisfied condition "success or failure"
Jan  7 14:33:22.222: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 14:33:22.321: INFO: Waiting for pod pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26 to disappear
Jan  7 14:33:22.325: INFO: Pod pod-projected-configmaps-daac27c2-cacb-4cf8-a6d6-6c969995ad26 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:33:22.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4821" for this suite.
Jan  7 14:33:28.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:33:28.568: INFO: namespace projected-4821 deletion completed in 6.231686662s

• [SLOW TEST:14.535 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
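
"With mappings" means the ConfigMap key is renamed on disk through items/path instead of keeping the key name. A sketch of the shape, with an illustrative ConfigMap name and key:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/renamed-data"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: my-configmap        # illustrative
            items:
            - key: data-1             # key in the ConfigMap
              path: renamed-data      # file name inside the mount
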
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:33:28.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  7 14:33:28.759: INFO: Waiting up to 5m0s for pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b" in namespace "emptydir-634" to be "success or failure"
Jan  7 14:33:28.778: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.788286ms
Jan  7 14:33:30.791: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031393662s
Jan  7 14:33:32.808: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048333452s
Jan  7 14:33:35.352: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592698672s
Jan  7 14:33:37.361: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.601890374s
STEP: Saw pod success
Jan  7 14:33:37.362: INFO: Pod "pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b" satisfied condition "success or failure"
Jan  7 14:33:37.366: INFO: Trying to get logs from node iruya-node pod pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b container test-container: 
STEP: delete the pod
Jan  7 14:33:37.533: INFO: Waiting for pod pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b to disappear
Jan  7 14:33:37.541: INFO: Pod pod-a60236b2-ca86-4d0c-9bcf-e78f9495729b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:33:37.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-634" for this suite.
Jan  7 14:33:43.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:33:43.731: INFO: namespace emptydir-634 deletion completed in 6.174786019s

• [SLOW TEST:15.162 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
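
The non-root emptyDir variants re-run the write-and-stat probe under a non-root UID against the default medium (node disk rather than tmpfs). The securityContext half is the interesting part; UID 1001 below is illustrative, the framework pins its own non-root UID:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-nonroot-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001        # illustrative non-root UID
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "id -u && touch /cache/f && chmod 0666 /cache/f && ls -ln /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}           # default medium
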
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:33:43.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-eab54615-d799-4352-9612-e0358d011c5a
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:33:56.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7318" for this suite.
Jan  7 14:34:18.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:34:18.158: INFO: namespace configmap-7318 deletion completed in 22.122147305s

• [SLOW TEST:34.426 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
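
The binary-data case exercises the ConfigMap binaryData field: non-UTF-8 bytes are stored base64-encoded and must come back byte-for-byte through the volume. A quick way to see the field populated, assuming any file of non-text bytes:

  $ printf '\x00\x01\x02\x03' > payload.bin
  $ kubectl create configmap binary-demo --from-file=payload.bin
  $ kubectl get configmap binary-demo -o yaml   # payload.bin appears under binaryData, base64-encoded

Mounting binary-demo as a volume then yields the raw four bytes at <mountPath>/payload.bin.
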
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:34:18.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:34:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-186" for this suite.
Jan  7 14:35:08.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:35:08.562: INFO: namespace kubelet-test-186 deletion completed in 42.166016246s

• [SLOW TEST:50.404 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
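
hostAliases entries in the pod spec are appended by the kubelet to the container's /etc/hosts, which is exactly what this test asserts. A minimal sketch with made-up addresses and names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "203.0.113.10"        # illustrative (TEST-NET-3) address
      hostnames:
      - "foo.local"
      - "bar.local"
    containers:
    - name: busybox-host-aliases
      image: busybox
      command: ["sh", "-c", "cat /etc/hosts"]
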
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:35:08.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  7 14:35:08.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2724'
Jan  7 14:35:09.098: INFO: stderr: ""
Jan  7 14:35:09.099: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  7 14:35:10.111: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:10.111: INFO: Found 0 / 1
Jan  7 14:35:11.110: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:11.111: INFO: Found 0 / 1
Jan  7 14:35:12.114: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:12.115: INFO: Found 0 / 1
Jan  7 14:35:13.107: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:13.107: INFO: Found 0 / 1
Jan  7 14:35:14.114: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:14.115: INFO: Found 0 / 1
Jan  7 14:35:15.108: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:15.108: INFO: Found 0 / 1
Jan  7 14:35:16.107: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:16.107: INFO: Found 0 / 1
Jan  7 14:35:17.109: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:17.109: INFO: Found 1 / 1
Jan  7 14:35:17.109: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  7 14:35:17.113: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 14:35:17.113: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  7 14:35:17.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724'
Jan  7 14:35:17.371: INFO: stderr: ""
Jan  7 14:35:17.372: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 14:35:15.985 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 14:35:15.986 # Server started, Redis version 3.2.12\n1:M 07 Jan 14:35:15.986 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 14:35:15.986 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  7 14:35:17.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724 --tail=1'
Jan  7 14:35:17.517: INFO: stderr: ""
Jan  7 14:35:17.518: INFO: stdout: "1:M 07 Jan 14:35:15.986 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  7 14:35:17.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724 --limit-bytes=1'
Jan  7 14:35:17.658: INFO: stderr: ""
Jan  7 14:35:17.658: INFO: stdout: " "
STEP: exposing timestamps
Jan  7 14:35:17.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724 --tail=1 --timestamps'
Jan  7 14:35:17.985: INFO: stderr: ""
Jan  7 14:35:17.985: INFO: stdout: "2020-01-07T14:35:15.986836885Z 1:M 07 Jan 14:35:15.986 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  7 14:35:20.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724 --since=1s'
Jan  7 14:35:20.668: INFO: stderr: ""
Jan  7 14:35:20.668: INFO: stdout: ""
Jan  7 14:35:20.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mbclj redis-master --namespace=kubectl-2724 --since=24h'
Jan  7 14:35:20.856: INFO: stderr: ""
Jan  7 14:35:20.857: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 14:35:15.985 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 14:35:15.986 # Server started, Redis version 3.2.12\n1:M 07 Jan 14:35:15.986 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 14:35:15.986 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  7 14:35:20.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2724'
Jan  7 14:35:20.997: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 14:35:20.997: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  7 14:35:20.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2724'
Jan  7 14:35:21.098: INFO: stderr: "No resources found.\n"
Jan  7 14:35:21.099: INFO: stdout: ""
Jan  7 14:35:21.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2724 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  7 14:35:21.296: INFO: stderr: ""
Jan  7 14:35:21.296: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:35:21.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2724" for this suite.
Jan  7 14:35:43.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:35:43.434: INFO: namespace kubectl-2724 deletion completed in 22.130868823s

• [SLOW TEST:34.872 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
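
Every filter the test exercised is an ordinary kubectl logs flag; the invocations below are the ones from this run, shortened with -n:

  $ kubectl logs redis-master-mbclj redis-master -n kubectl-2724 --tail=1          # last line only
  $ kubectl logs redis-master-mbclj redis-master -n kubectl-2724 --limit-bytes=1   # first byte only
  $ kubectl logs redis-master-mbclj redis-master -n kubectl-2724 --tail=1 --timestamps
  $ kubectl logs redis-master-mbclj redis-master -n kubectl-2724 --since=1s        # usually empty
  $ kubectl logs redis-master-mbclj redis-master -n kubectl-2724 --since=24h       # full startup banner
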
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:35:43.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6779618e-667d-494a-9cb2-5e48b6210669
STEP: Creating a pod to test consume secrets
Jan  7 14:35:43.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b" in namespace "projected-7645" to be "success or failure"
Jan  7 14:35:43.555: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.771898ms
Jan  7 14:35:45.641: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0920714s
Jan  7 14:35:47.653: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104097155s
Jan  7 14:35:49.665: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115662457s
Jan  7 14:35:51.674: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125265983s
Jan  7 14:35:53.685: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136273185s
STEP: Saw pod success
Jan  7 14:35:53.686: INFO: Pod "pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b" satisfied condition "success or failure"
Jan  7 14:35:53.692: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 14:35:53.822: INFO: Waiting for pod pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b to disappear
Jan  7 14:35:53.838: INFO: Pod pod-projected-secrets-0702eaaf-6ff9-4a53-8740-f1f91121044b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:35:53.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7645" for this suite.
Jan  7 14:35:59.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:35:59.996: INFO: namespace projected-7645 deletion completed in 6.13665376s

• [SLOW TEST:16.561 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
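
"Item Mode set" means a per-item mode that overrides the volume's defaultMode for that one path. A sketch with an illustrative Secret name:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected-secret"]
      volumeMounts:
      - name: sec
        mountPath: /etc/projected-secret
        readOnly: true
    volumes:
    - name: sec
      projected:
        sources:
        - secret:
            name: my-secret           # illustrative
            items:
            - key: data-1
              path: new-path-data-1
              mode: 0400              # overrides defaultMode for this file
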
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:35:59.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  7 14:36:00.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86" in namespace "projected-8675" to be "success or failure"
Jan  7 14:36:00.071: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Pending", Reason="", readiness=false. Elapsed: 5.134654ms
Jan  7 14:36:02.077: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011008811s
Jan  7 14:36:04.088: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021744566s
Jan  7 14:36:06.099: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033166042s
Jan  7 14:36:08.116: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049966542s
Jan  7 14:36:10.130: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064074123s
STEP: Saw pod success
Jan  7 14:36:10.130: INFO: Pod "downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86" satisfied condition "success or failure"
Jan  7 14:36:10.138: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86 container client-container: 
STEP: delete the pod
Jan  7 14:36:10.212: INFO: Waiting for pod downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86 to disappear
Jan  7 14:36:10.302: INFO: Pod downwardapi-volume-8b65fffa-002c-451b-b6df-64824ce03a86 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:36:10.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8675" for this suite.
Jan  7 14:36:16.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:36:16.513: INFO: namespace projected-8675 deletion completed in 6.20359041s

• [SLOW TEST:16.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
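
When the container declares no CPU limit, the downward API falls back to the node's allocatable CPU, which is what this test verifies. A sketch of the volume wiring; note the deliberate absence of resources.limits:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-default-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      # no resources.limits.cpu on purpose: the value falls back to node allocatable
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: "1m"     # report in millicores
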
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:36:16.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-sqt7
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 14:36:16.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sqt7" in namespace "subpath-4864" to be "success or failure"
Jan  7 14:36:16.710: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.985591ms
Jan  7 14:36:18.731: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055347621s
Jan  7 14:36:20.752: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076472871s
Jan  7 14:36:22.757: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081737731s
Jan  7 14:36:24.779: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103588548s
Jan  7 14:36:26.787: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 10.111351532s
Jan  7 14:36:28.797: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 12.121272715s
Jan  7 14:36:30.860: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 14.184261182s
Jan  7 14:36:32.888: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 16.21291175s
Jan  7 14:36:34.898: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 18.222286361s
Jan  7 14:36:36.930: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 20.255180659s
Jan  7 14:36:38.977: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 22.301743549s
Jan  7 14:36:40.987: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 24.311594334s
Jan  7 14:36:42.995: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 26.31949155s
Jan  7 14:36:45.001: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Running", Reason="", readiness=true. Elapsed: 28.326204005s
Jan  7 14:36:47.784: INFO: Pod "pod-subpath-test-projected-sqt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.109163933s
STEP: Saw pod success
Jan  7 14:36:47.785: INFO: Pod "pod-subpath-test-projected-sqt7" satisfied condition "success or failure"
Jan  7 14:36:47.802: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-sqt7 container test-container-subpath-projected-sqt7: 
STEP: delete the pod
Jan  7 14:36:47.938: INFO: Waiting for pod pod-subpath-test-projected-sqt7 to disappear
Jan  7 14:36:47.948: INFO: Pod pod-subpath-test-projected-sqt7 no longer exists
STEP: Deleting pod pod-subpath-test-projected-sqt7
Jan  7 14:36:47.948: INFO: Deleting pod "pod-subpath-test-projected-sqt7" in namespace "subpath-4864"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:36:47.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4864" for this suite.
Jan  7 14:36:54.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:36:54.164: INFO: namespace subpath-4864 deletion completed in 6.200255707s

• [SLOW TEST:37.650 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
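
Projected volumes are "atomic writer" volumes: the kubelet publishes updates via a symlink swap, and this spec checks that a subPath mount into such a volume keeps serving stable content while the pod runs (hence the ~20s spent in Running above). A hedged sketch, with the configMap name and key assumed rather than taken from the test:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected
    image: busybox
    command: ["sh", "-c", "cat /test-file && sleep 30"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-file
      subPath: data              # mounts one entry of the volume, not the whole directory
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap     # assumed; the test provisions its own data under "Setting up data"
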
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:36:54.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  7 14:36:54.277: INFO: Waiting up to 5m0s for pod "pod-ae019374-4436-47d7-b54c-19905de8409f" in namespace "emptydir-9375" to be "success or failure"
Jan  7 14:36:54.288: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.282713ms
Jan  7 14:36:56.297: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019604311s
Jan  7 14:36:58.307: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028815953s
Jan  7 14:37:00.341: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063678375s
Jan  7 14:37:02.390: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112007704s
STEP: Saw pod success
Jan  7 14:37:02.390: INFO: Pod "pod-ae019374-4436-47d7-b54c-19905de8409f" satisfied condition "success or failure"
Jan  7 14:37:02.394: INFO: Trying to get logs from node iruya-node pod pod-ae019374-4436-47d7-b54c-19905de8409f container test-container: 
STEP: delete the pod
Jan  7 14:37:02.477: INFO: Waiting for pod pod-ae019374-4436-47d7-b54c-19905de8409f to disappear
Jan  7 14:37:02.483: INFO: Pod pod-ae019374-4436-47d7-b54c-19905de8409f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:37:02.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9375" for this suite.
Jan  7 14:37:08.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:37:08.700: INFO: namespace emptydir-9375 deletion completed in 6.211500106s

• [SLOW TEST:14.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
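
A hedged sketch of the pod shape behind this spec (busybox stands in for the e2e mounttest image): medium: Memory backs the emptyDir with tmpfs, and the container writes a file with mode 0666 and reads the mode back. Omitting medium gives the node-default, disk-backed variant exercised later in this run (emptydir-8178).

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs; omit for the node's default storage medium
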
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:37:08.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  7 14:37:20.890: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.903: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.922: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.935: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.944: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.948: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.952: INFO: Unable to read jessie_udp@PodARecord from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.956: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c: the server could not find the requested resource (get pods dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c)
Jan  7 14:37:20.956: INFO: Lookups using dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  7 14:37:26.050: INFO: DNS probes using dns-6941/dns-test-98d0f543-8cfc-438d-b143-3c02db26d90c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:37:26.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6941" for this suite.
Jan  7 14:37:32.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:37:32.313: INFO: namespace dns-6941 deletion completed in 6.152434189s

• [SLOW TEST:23.613 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
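
The probe pods above loop dig over both UDP and TCP for the kubernetes.default service A record and the pod's own A record, writing OK marker files that the framework reads back; the earlier "Unable to read" lines are just the poll racing the probes before they first succeed. A quick manual equivalent (a sketch, not the test's jessie/wheezy probe images):

apiVersion: v1
kind: Pod
metadata:
  name: dns-check                 # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: lookup
    image: busybox:1.28           # 1.28's nslookup resolves against cluster DNS reliably
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]
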
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:37:32.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  7 14:37:41.195: INFO: Successfully updated pod "annotationupdatea67d2126-19f0-46d8-b5d7-4cecad839c4a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:37:43.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5659" for this suite.
Jan  7 14:38:05.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:38:05.435: INFO: namespace downward-api-5659 deletion completed in 22.159712324s

• [SLOW TEST:33.121 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
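
The pod behind this spec mounts a downwardAPI volume exposing metadata.annotations; the test then patches the pod's annotations ("Successfully updated pod" above) and waits for the kubelet to rewrite the file on its sync loop. A minimal sketch, with the initial annotation value assumed:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative; the run's pod is annotationupdate<uid>
  annotations:
    builder: alice                 # assumed initial value; the test patches this and re-reads the file
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
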
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:38:05.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0c84d418-93bb-4927-b7bb-d8813b576492
STEP: Creating a pod to test consume configMaps
Jan  7 14:38:05.561: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c" in namespace "projected-1376" to be "success or failure"
Jan  7 14:38:05.592: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.354253ms
Jan  7 14:38:07.600: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039189692s
Jan  7 14:38:09.610: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048523364s
Jan  7 14:38:11.654: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093418983s
Jan  7 14:38:13.681: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11959842s
STEP: Saw pod success
Jan  7 14:38:13.681: INFO: Pod "pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c" satisfied condition "success or failure"
Jan  7 14:38:13.688: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 14:38:13.846: INFO: Waiting for pod pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c to disappear
Jan  7 14:38:13.864: INFO: Pod pod-projected-configmaps-28121c1b-b47b-41fb-82eb-6a734ba4403c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:38:13.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1376" for this suite.
Jan  7 14:38:19.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:38:20.114: INFO: namespace projected-1376 deletion completed in 6.230311944s

• [SLOW TEST:14.679 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
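
"Mappings" means the configMap keys are relocated via items rather than appearing under their own names, and "Item mode set" means a per-file mode overrides the volume's defaultMode. A sketch with the key name and mode assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2 && stat -c '%a' /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example   # the run's map carries a uid suffix
          items:
          - key: data-2             # assumed key
            path: path/to/data-2    # the mapping: key relocated under a new path
            mode: 0400              # per-item mode, overriding defaultMode
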
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:38:20.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f7a5e0a4-8af4-455b-a522-dce8ff81dfd0 in namespace container-probe-1752
Jan  7 14:38:28.246: INFO: Started pod busybox-f7a5e0a4-8af4-455b-a522-dce8ff81dfd0 in namespace container-probe-1752
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 14:38:28.250: INFO: Initial restart count of pod busybox-f7a5e0a4-8af4-455b-a522-dce8ff81dfd0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:42:28.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1752" for this suite.
Jan  7 14:42:34.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:42:34.785: INFO: namespace container-probe-1752 deletion completed in 6.110273802s

• [SLOW TEST:254.670 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
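
The four-minute runtime is the observation window: /tmp/health exists for the life of the container, the exec probe keeps succeeding, and the spec asserts that restartCount stays 0 throughout. A sketch close to the upstream pod, with the probe timings assumed:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example   # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 15             # timings assumed, not taken from the test
      periodSeconds: 5
      failureThreshold: 1
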
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:42:34.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  7 14:42:34.845: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:42:48.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3841" for this suite.
Jan  7 14:42:54.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:42:54.438: INFO: namespace init-container-3841 deletion completed in 6.176258544s

• [SLOW TEST:19.653 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
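
With restartPolicy: Never, a failing init container is terminal: the pod goes straight to Failed and the app containers are never started, which is what the spec observes before tearing down. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example     # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]   # always fails, so the pod phase becomes Failed
  containers:
  - name: app
    image: busybox
    command: ["/bin/true"]    # never started because init1 failed
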
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:42:54.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2964.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2964.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2964.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.142.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.142.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.142.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.142.190_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2964.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2964.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2964.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2964.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.142.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.142.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.142.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.142.190_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  7 14:43:06.748: INFO: Unable to read wheezy_udp@dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.754: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.760: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.764: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.768: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.777: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.780: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.784: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.787: INFO: Unable to read 10.99.142.190_udp@PTR from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.791: INFO: Unable to read 10.99.142.190_tcp@PTR from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.797: INFO: Unable to read jessie_udp@dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.809: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.814: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.818: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.824: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.828: INFO: Unable to read jessie_udp@PodARecord from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.833: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.837: INFO: Unable to read 10.99.142.190_udp@PTR from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.841: INFO: Unable to read 10.99.142.190_tcp@PTR from pod dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b: the server could not find the requested resource (get pods dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b)
Jan  7 14:43:06.841: INFO: Lookups using dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b failed for: [wheezy_udp@dns-test-service.dns-2964.svc.cluster.local wheezy_tcp@dns-test-service.dns-2964.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.99.142.190_udp@PTR 10.99.142.190_tcp@PTR jessie_udp@dns-test-service.dns-2964.svc.cluster.local jessie_tcp@dns-test-service.dns-2964.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2964.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2964.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2964.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.99.142.190_udp@PTR 10.99.142.190_tcp@PTR]

Jan  7 14:43:11.998: INFO: DNS probes using dns-2964/dns-test-ec8fd590-ea28-450e-8d1a-5295a558307b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:43:12.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2964" for this suite.
Jan  7 14:43:18.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:43:18.586: INFO: namespace dns-2964 deletion completed in 6.229658s

• [SLOW TEST:24.147 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
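
On top of the cluster-DNS case, these probes cover service A records, _http._tcp SRV records, and the PTR record for the ClusterIP (the 10.99.142.190 lookups above); test-service-2 is the headless half, where DNS returns the backing pod IPs directly. A sketch of such a headless service, with the selector assumed:

apiVersion: v1
kind: Service
metadata:
  name: test-service-2        # name taken from the probe commands above
spec:
  clusterIP: None             # headless: no ClusterIP, A records point at the backing pods
  selector:
    dns-test: "true"          # assumed; the test wires its own selector
  ports:
  - name: http                # the named port is what backs the _http._tcp SRV records
    protocol: TCP
    port: 80
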
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:43:18.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  7 14:43:25.962: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:43:25.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8439" for this suite.
Jan  7 14:43:32.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:43:32.495: INFO: namespace container-runtime-8439 deletion completed in 6.496051816s

• [SLOW TEST:13.908 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
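
FallbackToLogsOnError consults the container logs only when the container exits with an error; here the container succeeds and writes nothing to the termination message path, so the message must stay empty, which is the "Expected: &{}" check above. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["/bin/true"]                           # exits 0 with no output, so no message is written
    terminationMessagePolicy: FallbackToLogsOnError  # logs are consulted only on error
    # terminationMessagePath defaults to /dev/termination-log
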
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:43:32.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:43:32.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8281" for this suite.
Jan  7 14:43:38.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:43:38.895: INFO: namespace kubelet-test-8281 deletion completed in 6.166153846s

• [SLOW TEST:6.399 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
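
The only assertion here is that a pod whose command fails on every restart can still be deleted cleanly. A sketch of such a pod; deleting it, e.g. kubectl delete pod bin-false-example, should succeed despite the crash-loop churn:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-example     # illustrative
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # fails immediately; the default restartPolicy Always keeps retrying
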
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:43:38.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  7 14:43:39.018: INFO: Waiting up to 5m0s for pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903" in namespace "emptydir-8178" to be "success or failure"
Jan  7 14:43:39.037: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Pending", Reason="", readiness=false. Elapsed: 19.150586ms
Jan  7 14:43:41.046: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027961882s
Jan  7 14:43:43.062: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043889256s
Jan  7 14:43:45.075: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056526953s
Jan  7 14:43:47.084: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066144273s
Jan  7 14:43:49.144: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125952405s
STEP: Saw pod success
Jan  7 14:43:49.144: INFO: Pod "pod-9842f33b-dc4a-4962-a269-0d6bb1f63903" satisfied condition "success or failure"
Jan  7 14:43:49.151: INFO: Trying to get logs from node iruya-node pod pod-9842f33b-dc4a-4962-a269-0d6bb1f63903 container test-container: 
STEP: delete the pod
Jan  7 14:43:49.384: INFO: Waiting for pod pod-9842f33b-dc4a-4962-a269-0d6bb1f63903 to disappear
Jan  7 14:43:49.392: INFO: Pod pod-9842f33b-dc4a-4962-a269-0d6bb1f63903 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:43:49.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8178" for this suite.
Jan  7 14:43:55.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:43:55.561: INFO: namespace emptydir-8178 deletion completed in 6.162088105s

• [SLOW TEST:16.666 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:43:55.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:44:04.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4324" for this suite.
Jan  7 14:44:26.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:44:26.875: INFO: namespace replication-controller-4324 deletion completed in 22.149825105s

• [SLOW TEST:31.314 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
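
Adoption works because the pre-existing pod carries no controller ownerReference yet and its label matches the RC's selector, so the RC takes ownership instead of creating a replacement. A sketch assuming the 'name' label key from the STEP text:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption        # matches the orphan pod, so the RC adopts it rather than creating one
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
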
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:44:26.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 14:44:26.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3559'
Jan  7 14:44:28.957: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 14:44:28.957: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  7 14:44:31.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3559'
Jan  7 14:44:31.276: INFO: stderr: ""
Jan  7 14:44:31.276: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:44:31.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3559" for this suite.
Jan  7 14:44:37.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:44:37.442: INFO: namespace kubectl-3559 deletion completed in 6.160137106s

• [SLOW TEST:10.566 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
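
The deprecation warning above is the hint that this generator is going away; the object it produces is roughly the following apps/v1 Deployment (the run=<name> label is what the generator sets, noted here as an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
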
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:44:37.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0107 14:44:48.209390       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 14:44:48.209: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:44:48.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8090" for this suite.
Jan  7 14:44:54.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:44:54.376: INFO: namespace gc-8090 deletion completed in 6.162572247s

• [SLOW TEST:16.934 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
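
"Not orphaning" means the rc is deleted with a Background (or Foreground) propagation policy rather than Orphan, so its pods keep their ownerReferences and the garbage collector removes them, which is what "wait for all pods to be garbage collected" observes. A sketch of an RC like the one the test creates, with replica count and image assumed:

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc         # illustrative
spec:
  replicas: 2
  selector:
    app: gc-test
  template:
    metadata:
      labels:
        app: gc-test          # pods created from this template carry an ownerReference to the RC
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
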
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:44:54.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  7 14:44:54.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a" in namespace "projected-4692" to be "success or failure"
Jan  7 14:44:54.538: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.11553ms
Jan  7 14:44:56.552: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027499736s
Jan  7 14:44:58.597: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072240082s
Jan  7 14:45:00.617: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092923939s
Jan  7 14:45:02.633: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108524469s
STEP: Saw pod success
Jan  7 14:45:02.633: INFO: Pod "downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a" satisfied condition "success or failure"
Jan  7 14:45:02.637: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a container client-container: 
STEP: delete the pod
Jan  7 14:45:02.713: INFO: Waiting for pod downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a to disappear
Jan  7 14:45:02.733: INFO: Pod downwardapi-volume-0ce24d7c-c66d-48b3-83af-8e417ae7472a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:45:02.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4692" for this suite.
Jan  7 14:45:08.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:45:08.931: INFO: namespace projected-4692 deletion completed in 6.189503711s

• [SLOW TEST:14.555 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:45:08.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
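
The exec transcripts below cat /etc/hosts in each container of both pods: the kubelet manages /etc/hosts only for the hostNetwork=false pod (and only in containers that don't mount over it), while the hostNetwork=true pod sees the node's own file. A sketch of the hostNetwork pod, with image and command assumed:

apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod   # the run also creates test-pod with hostNetwork: false
spec:
  hostNetwork: true             # containers share the node's network namespace and its /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "900"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "900"]
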
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  7 14:45:31.169: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:31.169: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:31.252287       8 log.go:172] (0xc001082b00) (0xc0027a59a0) Create stream
I0107 14:45:31.252493       8 log.go:172] (0xc001082b00) (0xc0027a59a0) Stream added, broadcasting: 1
I0107 14:45:31.262969       8 log.go:172] (0xc001082b00) Reply frame received for 1
I0107 14:45:31.263153       8 log.go:172] (0xc001082b00) (0xc0030610e0) Create stream
I0107 14:45:31.263199       8 log.go:172] (0xc001082b00) (0xc0030610e0) Stream added, broadcasting: 3
I0107 14:45:31.265561       8 log.go:172] (0xc001082b00) Reply frame received for 3
I0107 14:45:31.265602       8 log.go:172] (0xc001082b00) (0xc0027a5a40) Create stream
I0107 14:45:31.265620       8 log.go:172] (0xc001082b00) (0xc0027a5a40) Stream added, broadcasting: 5
I0107 14:45:31.268339       8 log.go:172] (0xc001082b00) Reply frame received for 5
I0107 14:45:31.369163       8 log.go:172] (0xc001082b00) Data frame received for 3
I0107 14:45:31.369236       8 log.go:172] (0xc0030610e0) (3) Data frame handling
I0107 14:45:31.369273       8 log.go:172] (0xc0030610e0) (3) Data frame sent
I0107 14:45:31.519309       8 log.go:172] (0xc001082b00) (0xc0030610e0) Stream removed, broadcasting: 3
I0107 14:45:31.519787       8 log.go:172] (0xc001082b00) Data frame received for 1
I0107 14:45:31.519810       8 log.go:172] (0xc0027a59a0) (1) Data frame handling
I0107 14:45:31.519843       8 log.go:172] (0xc0027a59a0) (1) Data frame sent
I0107 14:45:31.519859       8 log.go:172] (0xc001082b00) (0xc0027a59a0) Stream removed, broadcasting: 1
I0107 14:45:31.520662       8 log.go:172] (0xc001082b00) (0xc0027a5a40) Stream removed, broadcasting: 5
I0107 14:45:31.520749       8 log.go:172] (0xc001082b00) (0xc0027a59a0) Stream removed, broadcasting: 1
I0107 14:45:31.520778       8 log.go:172] (0xc001082b00) (0xc0030610e0) Stream removed, broadcasting: 3
I0107 14:45:31.520790       8 log.go:172] (0xc001082b00) (0xc0027a5a40) Stream removed, broadcasting: 5
Jan  7 14:45:31.520: INFO: Exec stderr: ""
Jan  7 14:45:31.521: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:31.521: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:31.521721       8 log.go:172] (0xc001082b00) Go away received
I0107 14:45:31.592093       8 log.go:172] (0xc002ff14a0) (0xc003061400) Create stream
I0107 14:45:31.592194       8 log.go:172] (0xc002ff14a0) (0xc003061400) Stream added, broadcasting: 1
I0107 14:45:31.597878       8 log.go:172] (0xc002ff14a0) Reply frame received for 1
I0107 14:45:31.597925       8 log.go:172] (0xc002ff14a0) (0xc0027a5ae0) Create stream
I0107 14:45:31.597935       8 log.go:172] (0xc002ff14a0) (0xc0027a5ae0) Stream added, broadcasting: 3
I0107 14:45:31.601279       8 log.go:172] (0xc002ff14a0) Reply frame received for 3
I0107 14:45:31.601495       8 log.go:172] (0xc002ff14a0) (0xc0027a5c20) Create stream
I0107 14:45:31.601518       8 log.go:172] (0xc002ff14a0) (0xc0027a5c20) Stream added, broadcasting: 5
I0107 14:45:31.604675       8 log.go:172] (0xc002ff14a0) Reply frame received for 5
I0107 14:45:31.693179       8 log.go:172] (0xc002ff14a0) Data frame received for 3
I0107 14:45:31.693332       8 log.go:172] (0xc0027a5ae0) (3) Data frame handling
I0107 14:45:31.693361       8 log.go:172] (0xc0027a5ae0) (3) Data frame sent
I0107 14:45:31.848543       8 log.go:172] (0xc002ff14a0) Data frame received for 1
I0107 14:45:31.848814       8 log.go:172] (0xc002ff14a0) (0xc0027a5c20) Stream removed, broadcasting: 5
I0107 14:45:31.848937       8 log.go:172] (0xc003061400) (1) Data frame handling
I0107 14:45:31.848969       8 log.go:172] (0xc002ff14a0) (0xc0027a5ae0) Stream removed, broadcasting: 3
I0107 14:45:31.848986       8 log.go:172] (0xc003061400) (1) Data frame sent
I0107 14:45:31.848998       8 log.go:172] (0xc002ff14a0) (0xc003061400) Stream removed, broadcasting: 1
I0107 14:45:31.849016       8 log.go:172] (0xc002ff14a0) Go away received
I0107 14:45:31.849460       8 log.go:172] (0xc002ff14a0) (0xc003061400) Stream removed, broadcasting: 1
I0107 14:45:31.849507       8 log.go:172] (0xc002ff14a0) (0xc0027a5ae0) Stream removed, broadcasting: 3
I0107 14:45:31.849522       8 log.go:172] (0xc002ff14a0) (0xc0027a5c20) Stream removed, broadcasting: 5
Jan  7 14:45:31.849: INFO: Exec stderr: ""
Jan  7 14:45:31.849: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:31.850: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:31.927140       8 log.go:172] (0xc0024a6840) (0xc0023bb180) Create stream
I0107 14:45:31.927217       8 log.go:172] (0xc0024a6840) (0xc0023bb180) Stream added, broadcasting: 1
I0107 14:45:31.933621       8 log.go:172] (0xc0024a6840) Reply frame received for 1
I0107 14:45:31.933659       8 log.go:172] (0xc0024a6840) (0xc00293b360) Create stream
I0107 14:45:31.933673       8 log.go:172] (0xc0024a6840) (0xc00293b360) Stream added, broadcasting: 3
I0107 14:45:31.937065       8 log.go:172] (0xc0024a6840) Reply frame received for 3
I0107 14:45:31.937083       8 log.go:172] (0xc0024a6840) (0xc0023bb2c0) Create stream
I0107 14:45:31.937092       8 log.go:172] (0xc0024a6840) (0xc0023bb2c0) Stream added, broadcasting: 5
I0107 14:45:31.938806       8 log.go:172] (0xc0024a6840) Reply frame received for 5
I0107 14:45:32.042388       8 log.go:172] (0xc0024a6840) Data frame received for 3
I0107 14:45:32.042496       8 log.go:172] (0xc00293b360) (3) Data frame handling
I0107 14:45:32.042529       8 log.go:172] (0xc00293b360) (3) Data frame sent
I0107 14:45:32.259194       8 log.go:172] (0xc0024a6840) Data frame received for 1
I0107 14:45:32.259442       8 log.go:172] (0xc0023bb180) (1) Data frame handling
I0107 14:45:32.259517       8 log.go:172] (0xc0023bb180) (1) Data frame sent
I0107 14:45:32.259855       8 log.go:172] (0xc0024a6840) (0xc0023bb180) Stream removed, broadcasting: 1
I0107 14:45:32.260058       8 log.go:172] (0xc0024a6840) (0xc00293b360) Stream removed, broadcasting: 3
I0107 14:45:32.260218       8 log.go:172] (0xc0024a6840) (0xc0023bb2c0) Stream removed, broadcasting: 5
I0107 14:45:32.260312       8 log.go:172] (0xc0024a6840) Go away received
I0107 14:45:32.260665       8 log.go:172] (0xc0024a6840) (0xc0023bb180) Stream removed, broadcasting: 1
I0107 14:45:32.260779       8 log.go:172] (0xc0024a6840) (0xc00293b360) Stream removed, broadcasting: 3
I0107 14:45:32.260788       8 log.go:172] (0xc0024a6840) (0xc0023bb2c0) Stream removed, broadcasting: 5
Jan  7 14:45:32.260: INFO: Exec stderr: ""
Jan  7 14:45:32.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:32.261: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:32.310467       8 log.go:172] (0xc0018b51e0) (0xc0027e6dc0) Create stream
I0107 14:45:32.310584       8 log.go:172] (0xc0018b51e0) (0xc0027e6dc0) Stream added, broadcasting: 1
I0107 14:45:32.317001       8 log.go:172] (0xc0018b51e0) Reply frame received for 1
I0107 14:45:32.317028       8 log.go:172] (0xc0018b51e0) (0xc001a4db80) Create stream
I0107 14:45:32.317035       8 log.go:172] (0xc0018b51e0) (0xc001a4db80) Stream added, broadcasting: 3
I0107 14:45:32.317870       8 log.go:172] (0xc0018b51e0) Reply frame received for 3
I0107 14:45:32.317892       8 log.go:172] (0xc0018b51e0) (0xc0027e6e60) Create stream
I0107 14:45:32.317897       8 log.go:172] (0xc0018b51e0) (0xc0027e6e60) Stream added, broadcasting: 5
I0107 14:45:32.318958       8 log.go:172] (0xc0018b51e0) Reply frame received for 5
I0107 14:45:32.401910       8 log.go:172] (0xc0018b51e0) Data frame received for 3
I0107 14:45:32.402210       8 log.go:172] (0xc001a4db80) (3) Data frame handling
I0107 14:45:32.402239       8 log.go:172] (0xc001a4db80) (3) Data frame sent
I0107 14:45:32.717187       8 log.go:172] (0xc0018b51e0) Data frame received for 1
I0107 14:45:32.717349       8 log.go:172] (0xc0018b51e0) (0xc0027e6e60) Stream removed, broadcasting: 5
I0107 14:45:32.717383       8 log.go:172] (0xc0027e6dc0) (1) Data frame handling
I0107 14:45:32.717408       8 log.go:172] (0xc0027e6dc0) (1) Data frame sent
I0107 14:45:32.717460       8 log.go:172] (0xc0018b51e0) (0xc001a4db80) Stream removed, broadcasting: 3
I0107 14:45:32.717497       8 log.go:172] (0xc0018b51e0) (0xc0027e6dc0) Stream removed, broadcasting: 1
I0107 14:45:32.717541       8 log.go:172] (0xc0018b51e0) Go away received
I0107 14:45:32.717727       8 log.go:172] (0xc0018b51e0) (0xc0027e6dc0) Stream removed, broadcasting: 1
I0107 14:45:32.717757       8 log.go:172] (0xc0018b51e0) (0xc001a4db80) Stream removed, broadcasting: 3
I0107 14:45:32.717776       8 log.go:172] (0xc0018b51e0) (0xc0027e6e60) Stream removed, broadcasting: 5
Jan  7 14:45:32.717: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  7 14:45:32.717: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:32.718: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:32.855636       8 log.go:172] (0xc0024a7550) (0xc0023bb5e0) Create stream
I0107 14:45:32.856079       8 log.go:172] (0xc0024a7550) (0xc0023bb5e0) Stream added, broadcasting: 1
I0107 14:45:32.868563       8 log.go:172] (0xc0024a7550) Reply frame received for 1
I0107 14:45:32.868845       8 log.go:172] (0xc0024a7550) (0xc0030614a0) Create stream
I0107 14:45:32.868904       8 log.go:172] (0xc0024a7550) (0xc0030614a0) Stream added, broadcasting: 3
I0107 14:45:32.871044       8 log.go:172] (0xc0024a7550) Reply frame received for 3
I0107 14:45:32.871080       8 log.go:172] (0xc0024a7550) (0xc0023bb720) Create stream
I0107 14:45:32.871090       8 log.go:172] (0xc0024a7550) (0xc0023bb720) Stream added, broadcasting: 5
I0107 14:45:32.872734       8 log.go:172] (0xc0024a7550) Reply frame received for 5
I0107 14:45:32.994684       8 log.go:172] (0xc0024a7550) Data frame received for 3
I0107 14:45:32.994820       8 log.go:172] (0xc0030614a0) (3) Data frame handling
I0107 14:45:32.994843       8 log.go:172] (0xc0030614a0) (3) Data frame sent
I0107 14:45:33.105459       8 log.go:172] (0xc0024a7550) (0xc0030614a0) Stream removed, broadcasting: 3
I0107 14:45:33.105615       8 log.go:172] (0xc0024a7550) Data frame received for 1
I0107 14:45:33.105646       8 log.go:172] (0xc0023bb5e0) (1) Data frame handling
I0107 14:45:33.105678       8 log.go:172] (0xc0023bb5e0) (1) Data frame sent
I0107 14:45:33.105692       8 log.go:172] (0xc0024a7550) (0xc0023bb5e0) Stream removed, broadcasting: 1
I0107 14:45:33.105741       8 log.go:172] (0xc0024a7550) (0xc0023bb720) Stream removed, broadcasting: 5
I0107 14:45:33.105809       8 log.go:172] (0xc0024a7550) Go away received
I0107 14:45:33.105977       8 log.go:172] (0xc0024a7550) (0xc0023bb5e0) Stream removed, broadcasting: 1
I0107 14:45:33.105987       8 log.go:172] (0xc0024a7550) (0xc0030614a0) Stream removed, broadcasting: 3
I0107 14:45:33.106028       8 log.go:172] (0xc0024a7550) (0xc0023bb720) Stream removed, broadcasting: 5
Jan  7 14:45:33.106: INFO: Exec stderr: ""
Jan  7 14:45:33.106: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:33.106: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:33.188392       8 log.go:172] (0xc0015f86e0) (0xc00293b400) Create stream
I0107 14:45:33.188570       8 log.go:172] (0xc0015f86e0) (0xc00293b400) Stream added, broadcasting: 1
I0107 14:45:33.193366       8 log.go:172] (0xc0015f86e0) Reply frame received for 1
I0107 14:45:33.193390       8 log.go:172] (0xc0015f86e0) (0xc003061540) Create stream
I0107 14:45:33.193398       8 log.go:172] (0xc0015f86e0) (0xc003061540) Stream added, broadcasting: 3
I0107 14:45:33.194934       8 log.go:172] (0xc0015f86e0) Reply frame received for 3
I0107 14:45:33.194954       8 log.go:172] (0xc0015f86e0) (0xc0023bbc20) Create stream
I0107 14:45:33.194962       8 log.go:172] (0xc0015f86e0) (0xc0023bbc20) Stream added, broadcasting: 5
I0107 14:45:33.196460       8 log.go:172] (0xc0015f86e0) Reply frame received for 5
I0107 14:45:33.287138       8 log.go:172] (0xc0015f86e0) Data frame received for 3
I0107 14:45:33.287280       8 log.go:172] (0xc003061540) (3) Data frame handling
I0107 14:45:33.287338       8 log.go:172] (0xc003061540) (3) Data frame sent
I0107 14:45:33.397290       8 log.go:172] (0xc0015f86e0) Data frame received for 1
I0107 14:45:33.397412       8 log.go:172] (0xc0015f86e0) (0xc003061540) Stream removed, broadcasting: 3
I0107 14:45:33.397461       8 log.go:172] (0xc00293b400) (1) Data frame handling
I0107 14:45:33.397487       8 log.go:172] (0xc00293b400) (1) Data frame sent
I0107 14:45:33.397516       8 log.go:172] (0xc0015f86e0) (0xc0023bbc20) Stream removed, broadcasting: 5
I0107 14:45:33.397532       8 log.go:172] (0xc0015f86e0) (0xc00293b400) Stream removed, broadcasting: 1
I0107 14:45:33.397546       8 log.go:172] (0xc0015f86e0) Go away received
I0107 14:45:33.397803       8 log.go:172] (0xc0015f86e0) (0xc00293b400) Stream removed, broadcasting: 1
I0107 14:45:33.397824       8 log.go:172] (0xc0015f86e0) (0xc003061540) Stream removed, broadcasting: 3
I0107 14:45:33.397831       8 log.go:172] (0xc0015f86e0) (0xc0023bbc20) Stream removed, broadcasting: 5
Jan  7 14:45:33.397: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  7 14:45:33.398: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:33.398: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:33.447072       8 log.go:172] (0xc002ac84d0) (0xc003061860) Create stream
I0107 14:45:33.447376       8 log.go:172] (0xc002ac84d0) (0xc003061860) Stream added, broadcasting: 1
I0107 14:45:33.454316       8 log.go:172] (0xc002ac84d0) Reply frame received for 1
I0107 14:45:33.454410       8 log.go:172] (0xc002ac84d0) (0xc0023bbcc0) Create stream
I0107 14:45:33.454445       8 log.go:172] (0xc002ac84d0) (0xc0023bbcc0) Stream added, broadcasting: 3
I0107 14:45:33.456770       8 log.go:172] (0xc002ac84d0) Reply frame received for 3
I0107 14:45:33.456937       8 log.go:172] (0xc002ac84d0) (0xc0027e6fa0) Create stream
I0107 14:45:33.456950       8 log.go:172] (0xc002ac84d0) (0xc0027e6fa0) Stream added, broadcasting: 5
I0107 14:45:33.457928       8 log.go:172] (0xc002ac84d0) Reply frame received for 5
I0107 14:45:33.550113       8 log.go:172] (0xc002ac84d0) Data frame received for 3
I0107 14:45:33.550236       8 log.go:172] (0xc0023bbcc0) (3) Data frame handling
I0107 14:45:33.550264       8 log.go:172] (0xc0023bbcc0) (3) Data frame sent
I0107 14:45:33.673215       8 log.go:172] (0xc002ac84d0) Data frame received for 1
I0107 14:45:33.673309       8 log.go:172] (0xc002ac84d0) (0xc0027e6fa0) Stream removed, broadcasting: 5
I0107 14:45:33.673363       8 log.go:172] (0xc003061860) (1) Data frame handling
I0107 14:45:33.673388       8 log.go:172] (0xc003061860) (1) Data frame sent
I0107 14:45:33.673409       8 log.go:172] (0xc002ac84d0) (0xc0023bbcc0) Stream removed, broadcasting: 3
I0107 14:45:33.673422       8 log.go:172] (0xc002ac84d0) (0xc003061860) Stream removed, broadcasting: 1
I0107 14:45:33.673436       8 log.go:172] (0xc002ac84d0) Go away received
I0107 14:45:33.673613       8 log.go:172] (0xc002ac84d0) (0xc003061860) Stream removed, broadcasting: 1
I0107 14:45:33.673626       8 log.go:172] (0xc002ac84d0) (0xc0023bbcc0) Stream removed, broadcasting: 3
I0107 14:45:33.673632       8 log.go:172] (0xc002ac84d0) (0xc0027e6fa0) Stream removed, broadcasting: 5
Jan  7 14:45:33.673: INFO: Exec stderr: ""
Jan  7 14:45:33.673: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:33.673: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:33.731156       8 log.go:172] (0xc0015f98c0) (0xc00293b900) Create stream
I0107 14:45:33.731423       8 log.go:172] (0xc0015f98c0) (0xc00293b900) Stream added, broadcasting: 1
I0107 14:45:33.745498       8 log.go:172] (0xc0015f98c0) Reply frame received for 1
I0107 14:45:33.745588       8 log.go:172] (0xc0015f98c0) (0xc0023bbd60) Create stream
I0107 14:45:33.745605       8 log.go:172] (0xc0015f98c0) (0xc0023bbd60) Stream added, broadcasting: 3
I0107 14:45:33.747572       8 log.go:172] (0xc0015f98c0) Reply frame received for 3
I0107 14:45:33.747595       8 log.go:172] (0xc0015f98c0) (0xc003061900) Create stream
I0107 14:45:33.747606       8 log.go:172] (0xc0015f98c0) (0xc003061900) Stream added, broadcasting: 5
I0107 14:45:33.749491       8 log.go:172] (0xc0015f98c0) Reply frame received for 5
I0107 14:45:33.918077       8 log.go:172] (0xc0015f98c0) Data frame received for 3
I0107 14:45:33.918262       8 log.go:172] (0xc0023bbd60) (3) Data frame handling
I0107 14:45:33.918289       8 log.go:172] (0xc0023bbd60) (3) Data frame sent
I0107 14:45:34.090963       8 log.go:172] (0xc0015f98c0) (0xc003061900) Stream removed, broadcasting: 5
I0107 14:45:34.091067       8 log.go:172] (0xc0015f98c0) Data frame received for 1
I0107 14:45:34.091104       8 log.go:172] (0xc0015f98c0) (0xc0023bbd60) Stream removed, broadcasting: 3
I0107 14:45:34.091129       8 log.go:172] (0xc00293b900) (1) Data frame handling
I0107 14:45:34.091141       8 log.go:172] (0xc00293b900) (1) Data frame sent
I0107 14:45:34.091149       8 log.go:172] (0xc0015f98c0) (0xc00293b900) Stream removed, broadcasting: 1
I0107 14:45:34.091161       8 log.go:172] (0xc0015f98c0) Go away received
I0107 14:45:34.091414       8 log.go:172] (0xc0015f98c0) (0xc00293b900) Stream removed, broadcasting: 1
I0107 14:45:34.091430       8 log.go:172] (0xc0015f98c0) (0xc0023bbd60) Stream removed, broadcasting: 3
I0107 14:45:34.091437       8 log.go:172] (0xc0015f98c0) (0xc003061900) Stream removed, broadcasting: 5
Jan  7 14:45:34.091: INFO: Exec stderr: ""
Jan  7 14:45:34.091: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:34.091: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:34.146250       8 log.go:172] (0xc001802d10) (0xc001226280) Create stream
I0107 14:45:34.146284       8 log.go:172] (0xc001802d10) (0xc001226280) Stream added, broadcasting: 1
I0107 14:45:34.152617       8 log.go:172] (0xc001802d10) Reply frame received for 1
I0107 14:45:34.152640       8 log.go:172] (0xc001802d10) (0xc0030619a0) Create stream
I0107 14:45:34.152648       8 log.go:172] (0xc001802d10) (0xc0030619a0) Stream added, broadcasting: 3
I0107 14:45:34.153947       8 log.go:172] (0xc001802d10) Reply frame received for 3
I0107 14:45:34.153968       8 log.go:172] (0xc001802d10) (0xc0012263c0) Create stream
I0107 14:45:34.153975       8 log.go:172] (0xc001802d10) (0xc0012263c0) Stream added, broadcasting: 5
I0107 14:45:34.155481       8 log.go:172] (0xc001802d10) Reply frame received for 5
I0107 14:45:34.283937       8 log.go:172] (0xc001802d10) Data frame received for 3
I0107 14:45:34.284008       8 log.go:172] (0xc0030619a0) (3) Data frame handling
I0107 14:45:34.284042       8 log.go:172] (0xc0030619a0) (3) Data frame sent
I0107 14:45:34.404657       8 log.go:172] (0xc001802d10) Data frame received for 1
I0107 14:45:34.404955       8 log.go:172] (0xc001802d10) (0xc0030619a0) Stream removed, broadcasting: 3
I0107 14:45:34.405188       8 log.go:172] (0xc001226280) (1) Data frame handling
I0107 14:45:34.405458       8 log.go:172] (0xc001226280) (1) Data frame sent
I0107 14:45:34.405737       8 log.go:172] (0xc001802d10) (0xc0012263c0) Stream removed, broadcasting: 5
I0107 14:45:34.405802       8 log.go:172] (0xc001802d10) (0xc001226280) Stream removed, broadcasting: 1
I0107 14:45:34.405840       8 log.go:172] (0xc001802d10) Go away received
I0107 14:45:34.406229       8 log.go:172] (0xc001802d10) (0xc001226280) Stream removed, broadcasting: 1
I0107 14:45:34.406251       8 log.go:172] (0xc001802d10) (0xc0030619a0) Stream removed, broadcasting: 3
I0107 14:45:34.406259       8 log.go:172] (0xc001802d10) (0xc0012263c0) Stream removed, broadcasting: 5
Jan  7 14:45:34.406: INFO: Exec stderr: ""
Jan  7 14:45:34.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5219 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 14:45:34.406: INFO: >>> kubeConfig: /root/.kube/config
I0107 14:45:34.482861       8 log.go:172] (0xc001083c30) (0xc0026e8000) Create stream
I0107 14:45:34.482985       8 log.go:172] (0xc001083c30) (0xc0026e8000) Stream added, broadcasting: 1
I0107 14:45:34.500428       8 log.go:172] (0xc001083c30) Reply frame received for 1
I0107 14:45:34.500510       8 log.go:172] (0xc001083c30) (0xc0023bbe00) Create stream
I0107 14:45:34.500534       8 log.go:172] (0xc001083c30) (0xc0023bbe00) Stream added, broadcasting: 3
I0107 14:45:34.502978       8 log.go:172] (0xc001083c30) Reply frame received for 3
I0107 14:45:34.503034       8 log.go:172] (0xc001083c30) (0xc0026e80a0) Create stream
I0107 14:45:34.503057       8 log.go:172] (0xc001083c30) (0xc0026e80a0) Stream added, broadcasting: 5
I0107 14:45:34.506340       8 log.go:172] (0xc001083c30) Reply frame received for 5
I0107 14:45:34.689497       8 log.go:172] (0xc001083c30) Data frame received for 3
I0107 14:45:34.689668       8 log.go:172] (0xc0023bbe00) (3) Data frame handling
I0107 14:45:34.689712       8 log.go:172] (0xc0023bbe00) (3) Data frame sent
I0107 14:45:34.828228       8 log.go:172] (0xc001083c30) (0xc0023bbe00) Stream removed, broadcasting: 3
I0107 14:45:34.828415       8 log.go:172] (0xc001083c30) Data frame received for 1
I0107 14:45:34.828448       8 log.go:172] (0xc0026e8000) (1) Data frame handling
I0107 14:45:34.828476       8 log.go:172] (0xc0026e8000) (1) Data frame sent
I0107 14:45:34.828491       8 log.go:172] (0xc001083c30) (0xc0026e80a0) Stream removed, broadcasting: 5
I0107 14:45:34.828605       8 log.go:172] (0xc001083c30) (0xc0026e8000) Stream removed, broadcasting: 1
I0107 14:45:34.828665       8 log.go:172] (0xc001083c30) Go away received
I0107 14:45:34.829163       8 log.go:172] (0xc001083c30) (0xc0026e8000) Stream removed, broadcasting: 1
I0107 14:45:34.829207       8 log.go:172] (0xc001083c30) (0xc0023bbe00) Stream removed, broadcasting: 3
I0107 14:45:34.829217       8 log.go:172] (0xc001083c30) (0xc0026e80a0) Stream removed, broadcasting: 5
Jan  7 14:45:34.829: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:45:34.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5219" for this suite.
Jan  7 14:46:36.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:46:37.063: INFO: namespace e2e-kubelet-etc-hosts-5219 deletion completed in 1m2.22412344s

• [SLOW TEST:88.132 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
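
The test that just completed exercises three container shapes: containers of a hostNetwork=false pod, which get a kubelet-managed /etc/hosts; a container in that same pod that mounts its own file over /etc/hosts and is therefore left alone; and containers of a hostNetwork=true pod, which see the node's file unmodified. A sketch of the first pod under those assumptions, using client-go types; the image, names, and command are illustrative, not the framework's exact spec.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // etcHostsTestPod is an illustrative pod for the hostNetwork=false case;
    // busybox-3 opts out of kubelet management by mounting the node's
    // /etc/hosts directly, which is what the third verification step checks.
    func etcHostsTestPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                HostNetwork: false, // kubelet writes /etc/hosts for these containers
                Volumes: []corev1.Volume{{
                    Name: "host-etc-hosts",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
                Containers: []corev1.Container{
                    {Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
                    {Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
                    {
                        Name:    "busybox-3",
                        Image:   "busybox",
                        Command: []string{"sleep", "900"},
                        VolumeMounts: []corev1.VolumeMount{{
                            Name:      "host-etc-hosts",
                            MountPath: "/etc/hosts", // explicit mount => not kubelet-managed
                        }},
                    },
                },
            },
        }
    }
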
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:46:37.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8995e0e6-720f-4e73-8e64-5aa8c549368a
STEP: Creating a pod to test consume secrets
Jan  7 14:46:37.166: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36" in namespace "projected-1885" to be "success or failure"
Jan  7 14:46:37.181: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Pending", Reason="", readiness=false. Elapsed: 14.596478ms
Jan  7 14:46:39.189: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022971563s
Jan  7 14:46:41.199: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032331425s
Jan  7 14:46:43.210: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043369845s
Jan  7 14:46:45.217: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050796647s
Jan  7 14:46:47.653: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.487118541s
STEP: Saw pod success
Jan  7 14:46:47.654: INFO: Pod "pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36" satisfied condition "success or failure"
Jan  7 14:46:47.659: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 14:46:47.739: INFO: Waiting for pod pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36 to disappear
Jan  7 14:46:47.745: INFO: Pod pod-projected-secrets-ed83f8c0-aed1-4c0f-a9e2-6777907eed36 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:46:47.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1885" for this suite.
Jan  7 14:46:53.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:46:54.049: INFO: namespace projected-1885 deletion completed in 6.297813799s

• [SLOW TEST:16.984 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
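
The pass above turns on two knobs worth calling out: a projected volume's DefaultMode is applied to every file it renders, and the pod-level fsGroup changes group ownership of the volume so a non-root user can read it. A minimal sketch of such a pod spec, assuming the secret already exists; all names, UIDs, and modes here are illustrative rather than the test's exact values.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretPod mounts a secret through a projected volume with a
    // restrictive file mode, readable by a non-root user via fsGroup.
    func projectedSecretPod(ns, secretName string) *corev1.Pod {
        defaultMode := int32(0440) // mode for every file in the projected volume
        runAsUser := int64(1000)
        fsGroup := int64(1001) // group ownership applied to the volume
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets", Namespace: ns},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &runAsUser,
                    FSGroup:   &fsGroup,
                },
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &defaultMode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "projected-secret-volume", MountPath: "/etc/projected",
                    }},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }
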
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:46:54.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8b15e71f-c512-4827-a6f9-46654ae51375
STEP: Creating a pod to test consume configMaps
Jan  7 14:46:54.168: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be" in namespace "projected-319" to be "success or failure"
Jan  7 14:46:54.179: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.538263ms
Jan  7 14:46:56.195: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026070155s
Jan  7 14:46:58.291: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122084699s
Jan  7 14:47:00.299: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129755365s
Jan  7 14:47:02.318: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14913766s
STEP: Saw pod success
Jan  7 14:47:02.318: INFO: Pod "pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be" satisfied condition "success or failure"
Jan  7 14:47:02.323: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 14:47:02.382: INFO: Waiting for pod pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be to disappear
Jan  7 14:47:02.390: INFO: Pod pod-projected-configmaps-4767d060-ba5a-4a40-abb8-512d446e56be no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:47:02.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-319" for this suite.
Jan  7 14:47:08.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:47:08.531: INFO: namespace projected-319 deletion completed in 6.135456388s

• [SLOW TEST:14.482 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
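
Each of these volume tests verifies file content the same way: the test container cats the mounted file and exits, and the framework then reads the container's logs (the "Trying to get logs from node ..." lines) to compare against the expected bytes. A small sketch of that log fetch, assuming a modern client-go; names are illustrative.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // containerLogs returns everything the container wrote to stdout, which
    // for these volume tests is the content of the mounted file.
    func containerLogs(ctx context.Context, cs *kubernetes.Clientset,
        ns, pod, container string) (string, error) {
        req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container})
        raw, err := req.DoRaw(ctx)
        if err != nil {
            return "", err
        }
        return string(raw), nil
    }
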
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:47:08.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6810
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  7 14:47:08.688: INFO: Found 0 stateful pods, waiting for 3
Jan  7 14:47:18.708: INFO: Found 2 stateful pods, waiting for 3
Jan  7 14:47:28.698: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 14:47:28.698: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 14:47:28.698: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  7 14:47:38.698: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 14:47:38.698: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 14:47:38.698: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 14:47:38.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 14:47:39.215: INFO: stderr: "I0107 14:47:38.928687    3095 log.go:172] (0xc000a26370) (0xc0009e66e0) Create stream\nI0107 14:47:38.928903    3095 log.go:172] (0xc000a26370) (0xc0009e66e0) Stream added, broadcasting: 1\nI0107 14:47:38.933345    3095 log.go:172] (0xc000a26370) Reply frame received for 1\nI0107 14:47:38.933502    3095 log.go:172] (0xc000a26370) (0xc00065a3c0) Create stream\nI0107 14:47:38.933530    3095 log.go:172] (0xc000a26370) (0xc00065a3c0) Stream added, broadcasting: 3\nI0107 14:47:38.935156    3095 log.go:172] (0xc000a26370) Reply frame received for 3\nI0107 14:47:38.935195    3095 log.go:172] (0xc000a26370) (0xc0009e6780) Create stream\nI0107 14:47:38.935204    3095 log.go:172] (0xc000a26370) (0xc0009e6780) Stream added, broadcasting: 5\nI0107 14:47:38.936282    3095 log.go:172] (0xc000a26370) Reply frame received for 5\nI0107 14:47:39.076245    3095 log.go:172] (0xc000a26370) Data frame received for 5\nI0107 14:47:39.076356    3095 log.go:172] (0xc0009e6780) (5) Data frame handling\nI0107 14:47:39.076378    3095 log.go:172] (0xc0009e6780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 14:47:39.145362    3095 log.go:172] (0xc000a26370) Data frame received for 3\nI0107 14:47:39.145391    3095 log.go:172] (0xc00065a3c0) (3) Data frame handling\nI0107 14:47:39.145408    3095 log.go:172] (0xc00065a3c0) (3) Data frame sent\nI0107 14:47:39.209517    3095 log.go:172] (0xc000a26370) Data frame received for 1\nI0107 14:47:39.209652    3095 log.go:172] (0xc000a26370) (0xc00065a3c0) Stream removed, broadcasting: 3\nI0107 14:47:39.209695    3095 log.go:172] (0xc0009e66e0) (1) Data frame handling\nI0107 14:47:39.209711    3095 log.go:172] (0xc0009e66e0) (1) Data frame sent\nI0107 14:47:39.209800    3095 log.go:172] (0xc000a26370) (0xc0009e6780) Stream removed, broadcasting: 5\nI0107 14:47:39.209844    3095 log.go:172] (0xc000a26370) (0xc0009e66e0) Stream removed, broadcasting: 1\nI0107 14:47:39.209864    3095 log.go:172] (0xc000a26370) Go away received\nI0107 14:47:39.210349    3095 log.go:172] (0xc000a26370) (0xc0009e66e0) Stream removed, broadcasting: 1\nI0107 14:47:39.210358    3095 log.go:172] (0xc000a26370) (0xc00065a3c0) Stream removed, broadcasting: 3\nI0107 14:47:39.210364    3095 log.go:172] (0xc000a26370) (0xc0009e6780) Stream removed, broadcasting: 5\n"
Jan  7 14:47:39.215: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 14:47:39.215: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  7 14:47:49.269: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  7 14:47:59.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 14:47:59.794: INFO: stderr: "I0107 14:47:59.606293    3116 log.go:172] (0xc000116fd0) (0xc000586aa0) Create stream\nI0107 14:47:59.606811    3116 log.go:172] (0xc000116fd0) (0xc000586aa0) Stream added, broadcasting: 1\nI0107 14:47:59.614200    3116 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0107 14:47:59.614423    3116 log.go:172] (0xc000116fd0) (0xc000586320) Create stream\nI0107 14:47:59.614448    3116 log.go:172] (0xc000116fd0) (0xc000586320) Stream added, broadcasting: 3\nI0107 14:47:59.618703    3116 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0107 14:47:59.618848    3116 log.go:172] (0xc000116fd0) (0xc000426000) Create stream\nI0107 14:47:59.618895    3116 log.go:172] (0xc000116fd0) (0xc000426000) Stream added, broadcasting: 5\nI0107 14:47:59.620974    3116 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0107 14:47:59.710191    3116 log.go:172] (0xc000116fd0) Data frame received for 5\nI0107 14:47:59.710735    3116 log.go:172] (0xc000426000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 14:47:59.710822    3116 log.go:172] (0xc000116fd0) Data frame received for 3\nI0107 14:47:59.710899    3116 log.go:172] (0xc000586320) (3) Data frame handling\nI0107 14:47:59.710925    3116 log.go:172] (0xc000426000) (5) Data frame sent\nI0107 14:47:59.710954    3116 log.go:172] (0xc000586320) (3) Data frame sent\nI0107 14:47:59.787350    3116 log.go:172] (0xc000116fd0) Data frame received for 1\nI0107 14:47:59.787616    3116 log.go:172] (0xc000116fd0) (0xc000586320) Stream removed, broadcasting: 3\nI0107 14:47:59.787711    3116 log.go:172] (0xc000586aa0) (1) Data frame handling\nI0107 14:47:59.787745    3116 log.go:172] (0xc000586aa0) (1) Data frame sent\nI0107 14:47:59.787810    3116 log.go:172] (0xc000116fd0) (0xc000426000) Stream removed, broadcasting: 5\nI0107 14:47:59.787844    3116 log.go:172] (0xc000116fd0) (0xc000586aa0) Stream removed, broadcasting: 1\nI0107 14:47:59.787887    3116 log.go:172] (0xc000116fd0) Go away received\nI0107 14:47:59.788411    3116 log.go:172] (0xc000116fd0) (0xc000586aa0) Stream removed, broadcasting: 1\nI0107 14:47:59.788459    3116 log.go:172] (0xc000116fd0) (0xc000586320) Stream removed, broadcasting: 3\nI0107 14:47:59.788471    3116 log.go:172] (0xc000116fd0) (0xc000426000) Stream removed, broadcasting: 5\n"
Jan  7 14:47:59.794: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 14:47:59.794: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 14:48:09.885: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:48:09.886: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:09.886: INFO: Waiting for Pod statefulset-6810/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:09.886: INFO: Waiting for Pod statefulset-6810/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:19.914: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:48:19.915: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:19.915: INFO: Waiting for Pod statefulset-6810/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:29.901: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:48:29.901: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:29.901: INFO: Waiting for Pod statefulset-6810/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:40.003: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:48:40.003: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:49.906: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:48:49.907: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 14:48:59.901: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  7 14:49:09.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 14:49:10.306: INFO: stderr: "I0107 14:49:10.105343    3136 log.go:172] (0xc00013ac60) (0xc000322640) Create stream\nI0107 14:49:10.105560    3136 log.go:172] (0xc00013ac60) (0xc000322640) Stream added, broadcasting: 1\nI0107 14:49:10.110409    3136 log.go:172] (0xc00013ac60) Reply frame received for 1\nI0107 14:49:10.110494    3136 log.go:172] (0xc00013ac60) (0xc000406000) Create stream\nI0107 14:49:10.110509    3136 log.go:172] (0xc00013ac60) (0xc000406000) Stream added, broadcasting: 3\nI0107 14:49:10.111911    3136 log.go:172] (0xc00013ac60) Reply frame received for 3\nI0107 14:49:10.111936    3136 log.go:172] (0xc00013ac60) (0xc0003226e0) Create stream\nI0107 14:49:10.111943    3136 log.go:172] (0xc00013ac60) (0xc0003226e0) Stream added, broadcasting: 5\nI0107 14:49:10.112781    3136 log.go:172] (0xc00013ac60) Reply frame received for 5\nI0107 14:49:10.203255    3136 log.go:172] (0xc00013ac60) Data frame received for 5\nI0107 14:49:10.203300    3136 log.go:172] (0xc0003226e0) (5) Data frame handling\nI0107 14:49:10.203325    3136 log.go:172] (0xc0003226e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0107 14:49:10.237064    3136 log.go:172] (0xc00013ac60) Data frame received for 3\nI0107 14:49:10.237148    3136 log.go:172] (0xc000406000) (3) Data frame handling\nI0107 14:49:10.237169    3136 log.go:172] (0xc000406000) (3) Data frame sent\nI0107 14:49:10.298808    3136 log.go:172] (0xc00013ac60) (0xc000406000) Stream removed, broadcasting: 3\nI0107 14:49:10.298975    3136 log.go:172] (0xc00013ac60) Data frame received for 1\nI0107 14:49:10.299008    3136 log.go:172] (0xc00013ac60) (0xc0003226e0) Stream removed, broadcasting: 5\nI0107 14:49:10.299051    3136 log.go:172] (0xc000322640) (1) Data frame handling\nI0107 14:49:10.299074    3136 log.go:172] (0xc000322640) (1) Data frame sent\nI0107 14:49:10.299090    3136 log.go:172] (0xc00013ac60) (0xc000322640) Stream removed, broadcasting: 1\nI0107 14:49:10.299108    3136 log.go:172] (0xc00013ac60) Go away received\nI0107 14:49:10.299640    3136 log.go:172] (0xc00013ac60) (0xc000322640) Stream removed, broadcasting: 1\nI0107 14:49:10.299654    3136 log.go:172] (0xc00013ac60) (0xc000406000) Stream removed, broadcasting: 3\nI0107 14:49:10.299675    3136 log.go:172] (0xc00013ac60) (0xc0003226e0) Stream removed, broadcasting: 5\n"
Jan  7 14:49:10.307: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 14:49:10.307: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 14:49:20.357: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  7 14:49:30.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 14:49:30.773: INFO: stderr: "I0107 14:49:30.625680    3154 log.go:172] (0xc0007e60b0) (0xc00079c6e0) Create stream\nI0107 14:49:30.625931    3154 log.go:172] (0xc0007e60b0) (0xc00079c6e0) Stream added, broadcasting: 1\nI0107 14:49:30.629118    3154 log.go:172] (0xc0007e60b0) Reply frame received for 1\nI0107 14:49:30.629151    3154 log.go:172] (0xc0007e60b0) (0xc000590320) Create stream\nI0107 14:49:30.629160    3154 log.go:172] (0xc0007e60b0) (0xc000590320) Stream added, broadcasting: 3\nI0107 14:49:30.630133    3154 log.go:172] (0xc0007e60b0) Reply frame received for 3\nI0107 14:49:30.630157    3154 log.go:172] (0xc0007e60b0) (0xc0002ee000) Create stream\nI0107 14:49:30.630165    3154 log.go:172] (0xc0007e60b0) (0xc0002ee000) Stream added, broadcasting: 5\nI0107 14:49:30.631267    3154 log.go:172] (0xc0007e60b0) Reply frame received for 5\nI0107 14:49:30.708237    3154 log.go:172] (0xc0007e60b0) Data frame received for 3\nI0107 14:49:30.708344    3154 log.go:172] (0xc000590320) (3) Data frame handling\nI0107 14:49:30.708376    3154 log.go:172] (0xc000590320) (3) Data frame sent\nI0107 14:49:30.708434    3154 log.go:172] (0xc0007e60b0) Data frame received for 5\nI0107 14:49:30.708454    3154 log.go:172] (0xc0002ee000) (5) Data frame handling\nI0107 14:49:30.708473    3154 log.go:172] (0xc0002ee000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0107 14:49:30.767517    3154 log.go:172] (0xc0007e60b0) Data frame received for 1\nI0107 14:49:30.767571    3154 log.go:172] (0xc00079c6e0) (1) Data frame handling\nI0107 14:49:30.767589    3154 log.go:172] (0xc00079c6e0) (1) Data frame sent\nI0107 14:49:30.767615    3154 log.go:172] (0xc0007e60b0) (0xc00079c6e0) Stream removed, broadcasting: 1\nI0107 14:49:30.767921    3154 log.go:172] (0xc0007e60b0) (0xc000590320) Stream removed, broadcasting: 3\nI0107 14:49:30.767988    3154 log.go:172] (0xc0007e60b0) (0xc0002ee000) Stream removed, broadcasting: 5\nI0107 14:49:30.768035    3154 log.go:172] (0xc0007e60b0) Go away received\nI0107 14:49:30.768473    3154 log.go:172] (0xc0007e60b0) (0xc00079c6e0) Stream removed, broadcasting: 1\nI0107 14:49:30.768524    3154 log.go:172] (0xc0007e60b0) (0xc000590320) Stream removed, broadcasting: 3\nI0107 14:49:30.768539    3154 log.go:172] (0xc0007e60b0) (0xc0002ee000) Stream removed, broadcasting: 5\n"
Jan  7 14:49:30.773: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 14:49:30.773: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 14:49:40.817: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:49:40.818: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 14:49:40.818: INFO: Waiting for Pod statefulset-6810/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 14:49:50.834: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:49:50.834: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 14:49:50.834: INFO: Waiting for Pod statefulset-6810/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 14:50:00.835: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:50:00.835: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 14:50:10.840: INFO: Waiting for StatefulSet statefulset-6810/ss2 to complete update
Jan  7 14:50:10.840: INFO: Waiting for Pod statefulset-6810/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  7 14:50:20.841: INFO: Deleting all statefulset in ns statefulset-6810
Jan  7 14:50:20.846: INFO: Scaling statefulset ss2 to 0
Jan  7 14:51:00.905: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 14:51:00.910: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:51:00.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6810" for this suite.
Jan  7 14:51:09.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:51:09.110: INFO: namespace statefulset-6810 deletion completed in 8.154429939s

• [SLOW TEST:240.579 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
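
The rolling update above is driven by a single change: the pod template's image moves from nginx:1.14-alpine to nginx:1.15-alpine, the controller stamps a new revision (ss2-7c9b54fd4c), and the "Waiting for Pod ... to have revision" lines poll, in reverse ordinal order, until every pod reports the update revision; the rollback is the same operation with the old image, which is why the revisions swap roles later in the log. A sketch of the update-and-wait loop with modern client-go (the e2e framework uses its own helpers); names, the hard-coded container index, and intervals are illustrative.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateImageAndWait bumps the StatefulSet's first container image and
    // waits until the rollout finishes, i.e. the current revision catches
    // up with the update revision. Conflict retries are omitted for brevity.
    func updateImageAndWait(ctx context.Context, cs *kubernetes.Clientset,
        ns, name, image string) error {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        ss.Spec.Template.Spec.Containers[0].Image = image
        if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
            return err
        }
        for {
            ss, err = cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            // Done when every replica runs the new controller revision.
            if ss.Status.UpdateRevision == ss.Status.CurrentRevision &&
                ss.Status.UpdatedReplicas == *ss.Spec.Replicas {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("rollout of %s/%s timed out: %w", ns, name, ctx.Err())
            case <-time.After(10 * time.Second):
            }
        }
    }
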
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:51:09.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-d43f9f1b-42c6-4c41-be45-df9ccb9186d8
STEP: Creating configMap with name cm-test-opt-upd-acdae888-f523-4fbf-ab64-77877de74141
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d43f9f1b-42c6-4c41-be45-df9ccb9186d8
STEP: Updating configmap cm-test-opt-upd-acdae888-f523-4fbf-ab64-77877de74141
STEP: Creating configMap with name cm-test-opt-create-6b2a42e3-541a-45c2-a18c-82554b582733
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:51:25.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3282" for this suite.
Jan  7 14:51:49.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:51:49.674: INFO: namespace configmap-3282 deletion completed in 24.161106994s

• [SLOW TEST:40.563 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
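
The "optional updates" pass above relies on ConfigMap volume sources being marked Optional: the pod stays healthy when cm-test-opt-del is deleted, and the kubelet's sync loop eventually projects both the updated map and the newly created one into the mounted files (the "waiting to observe update in volume" step). A sketch of one such volume, with illustrative names:

    package main

    import corev1 "k8s.io/api/core/v1"

    // optionalConfigMapVolume builds a volume whose ConfigMap may be absent:
    // a missing map does not block pod startup, and later updates to it are
    // synced into the mounted files by the kubelet.
    func optionalConfigMapVolume(volName, cmName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    Optional:             &optional,
                },
            },
        }
    }
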
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:51:49.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 14:51:49.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2939'
Jan  7 14:51:50.027: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 14:51:50.027: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  7 14:51:52.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2939'
Jan  7 14:51:52.323: INFO: stderr: ""
Jan  7 14:51:52.323: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:51:52.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2939" for this suite.
Jan  7 14:51:58.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:51:58.540: INFO: namespace kubectl-2939 deletion completed in 6.207358461s

• [SLOW TEST:8.866 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
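
The warning captured above ("kubectl run --generator=deployment/apps.v1 is DEPRECATED") is why later kubectl releases dropped Deployment creation from kubectl run; the durable equivalents are kubectl create deployment or building the object directly. A client-go sketch of the latter under that assumption; the one-replica shape and label key mirror what the generator produced, but the names here are illustrative.

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createNginxDeployment does what the deprecated generator did: create a
    // one-replica Deployment running the given nginx image.
    func createNginxDeployment(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
        one := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-deployment"}
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &one,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-deployment",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        _, err := cs.AppsV1().Deployments(ns).Create(ctx, dep, metav1.CreateOptions{})
        return err
    }
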
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:51:58.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-00391b42-febb-4d42-b2c0-0a7e490a4e3d
STEP: Creating a pod to test consume configMaps
Jan  7 14:51:58.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4" in namespace "projected-9243" to be "success or failure"
Jan  7 14:51:58.744: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.493987ms
Jan  7 14:52:00.761: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075868274s
Jan  7 14:52:02.773: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087678567s
Jan  7 14:52:04.786: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100146375s
Jan  7 14:52:06.795: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109165361s
STEP: Saw pod success
Jan  7 14:52:06.795: INFO: Pod "pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4" satisfied condition "success or failure"
Jan  7 14:52:06.799: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 14:52:06.862: INFO: Waiting for pod pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4 to disappear
Jan  7 14:52:06.871: INFO: Pod pod-projected-configmaps-d27b4b37-78c5-4b88-a626-1eef305fbdc4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:52:06.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9243" for this suite.
Jan  7 14:52:12.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:52:13.024: INFO: namespace projected-9243 deletion completed in 6.143052988s

• [SLOW TEST:14.484 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:52:13.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e5b63dc2-c511-4dbd-80a9-47640ae51ea5
STEP: Creating a pod to test consume secrets
Jan  7 14:52:13.196: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89" in namespace "projected-2768" to be "success or failure"
Jan  7 14:52:13.216: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Pending", Reason="", readiness=false. Elapsed: 19.235308ms
Jan  7 14:52:15.223: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02687606s
Jan  7 14:52:17.236: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039996769s
Jan  7 14:52:19.244: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047244952s
Jan  7 14:52:21.259: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062634719s
Jan  7 14:52:23.267: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070734881s
STEP: Saw pod success
Jan  7 14:52:23.267: INFO: Pod "pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89" satisfied condition "success or failure"
Jan  7 14:52:23.273: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 14:52:23.317: INFO: Waiting for pod pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89 to disappear
Jan  7 14:52:23.325: INFO: Pod pod-projected-secrets-fae72fb8-3fd4-4d4c-bf4c-4f08ac602a89 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:52:23.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2768" for this suite.
Jan  7 14:52:29.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:52:29.654: INFO: namespace projected-2768 deletion completed in 6.32491916s

• [SLOW TEST:16.629 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
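
The projected-secret variant differs from the ConfigMap case above only in the volume source and in the defaultMode check: the suite verifies the permission bits on the mounted file as well as its content. A sketch under the same illustrative-naming caveat (0400 is the restrictive mode this kind of spec typically asserts):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    # Show both the permission bits and the content of the projected key.
    command: ["sh", "-c", "ls -l /etc/projected/data-1 && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400   # -r-------- on the projected files
      sources:
      - secret:
          name: demo-secret
EOF
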
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:52:29.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  7 14:52:29.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42" in namespace "projected-732" to be "success or failure"
Jan  7 14:52:29.809: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42": Phase="Pending", Reason="", readiness=false. Elapsed: 31.157783ms
Jan  7 14:52:31.816: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037883274s
Jan  7 14:52:33.831: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052683879s
Jan  7 14:52:35.842: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063670313s
Jan  7 14:52:37.894: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115652181s
STEP: Saw pod success
Jan  7 14:52:37.894: INFO: Pod "downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42" satisfied condition "success or failure"
Jan  7 14:52:37.989: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42 container client-container: 
STEP: delete the pod
Jan  7 14:52:38.666: INFO: Waiting for pod downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42 to disappear
Jan  7 14:52:38.677: INFO: Pod downwardapi-volume-59a887d6-2efb-4b82-a4f7-81a5d2126e42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:52:38.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-732" for this suite.
Jan  7 14:52:44.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:52:44.811: INFO: namespace projected-732 deletion completed in 6.128237319s

• [SLOW TEST:15.156 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
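
Here the projected volume's source is the downward API, exposing the container's own CPU request as a file. Resource fields are rounded up to whole units of the divisor, so with the default divisor of 1 a 250m request reads back as 1, while a divisor of 1m would read back as 250. A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
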
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:52:44.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 14:52:44.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9512'
Jan  7 14:52:45.052: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 14:52:45.053: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  7 14:52:45.076: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  7 14:52:45.144: INFO: scanned /root for discovery docs: 
Jan  7 14:52:45.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9512'
Jan  7 14:53:08.312: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  7 14:53:08.313: INFO: stdout: "Created e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4\nScaling up e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  7 14:53:08.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9512'
Jan  7 14:53:08.524: INFO: stderr: ""
Jan  7 14:53:08.524: INFO: stdout: "e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4-gtwb5 "
Jan  7 14:53:08.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4-gtwb5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512'
Jan  7 14:53:08.627: INFO: stderr: ""
Jan  7 14:53:08.628: INFO: stdout: "true"
Jan  7 14:53:08.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4-gtwb5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512'
Jan  7 14:53:08.740: INFO: stderr: ""
Jan  7 14:53:08.740: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  7 14:53:08.741: INFO: e2e-test-nginx-rc-fdcfa9350e95d8b6fb4be8b851030fa4-gtwb5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  7 14:53:08.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9512'
Jan  7 14:53:08.863: INFO: stderr: ""
Jan  7 14:53:08.863: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:53:08.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9512" for this suite.
Jan  7 14:53:30.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:53:31.066: INFO: namespace kubectl-9512 deletion completed in 22.188184672s

• [SLOW TEST:46.255 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
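
Both deprecation warnings in this spec's stderr are worth noting: --generator=run/v1 and kubectl rolling-update were removed in later releases. Stripped of the suite's --kubeconfig and --namespace plumbing, the sequence the test runs is:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
kubectl delete rc e2e-test-nginx-rc

On current clusters the equivalent workflow is a Deployment rolled with kubectl set image and watched with kubectl rollout status; rolling-update only ever worked for ReplicationControllers.
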
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:53:31.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  7 14:56:33.894: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:33.937: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:35.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:35.953: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:37.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:37.945: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:39.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:39.944: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:41.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:41.949: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:43.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:43.946: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:45.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:45.950: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:47.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:47.957: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:49.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:49.950: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:51.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:51.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:53.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:53.955: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:55.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:55.947: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:57.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:57.947: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:56:59.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:56:59.946: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:57:01.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:57:01.944: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:57:03.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:57:03.950: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:57:05.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:57:05.953: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 14:57:07.937: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 14:57:07.954: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:57:07.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3805" for this suite.
Jan  7 14:57:31.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:57:32.155: INFO: namespace container-lifecycle-hook-3805 deletion completed in 24.189922354s

• [SLOW TEST:241.087 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
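
The interesting part of this spec is the hook declaration itself: the pod carries a postStart exec handler, and the container cannot reach a running state until that handler completes, which is what "check poststart hook" waits on. A minimal sketch (in the real spec the hook is wired to the handler pod created in [BeforeEach]; the command here is a stand-in):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it is created.
          command: ["/bin/sh", "-c", "echo poststart > /tmp/poststart"]
EOF
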
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:57:32.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-94dc976f-41b0-4865-bb19-7b54ad924af2
STEP: Creating a pod to test consume secrets
Jan  7 14:57:32.296: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2" in namespace "projected-7535" to be "success or failure"
Jan  7 14:57:32.306: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.631566ms
Jan  7 14:57:34.315: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018494165s
Jan  7 14:57:36.324: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02751403s
Jan  7 14:57:38.333: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036866962s
Jan  7 14:57:40.347: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051057606s
Jan  7 14:57:42.357: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06059995s
STEP: Saw pod success
Jan  7 14:57:42.357: INFO: Pod "pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2" satisfied condition "success or failure"
Jan  7 14:57:42.361: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2 container secret-volume-test: 
STEP: delete the pod
Jan  7 14:57:42.645: INFO: Waiting for pod pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2 to disappear
Jan  7 14:57:42.686: INFO: Pod pod-projected-secrets-896375f7-3019-4aa7-a0d9-6c0ff6cd01b2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:57:42.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7535" for this suite.
Jan  7 14:57:48.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:57:48.848: INFO: namespace projected-7535 deletion completed in 6.155682932s

• [SLOW TEST:16.693 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
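
"Multiple volumes" here means the same Secret projected twice at two mount paths, with the container reading both copies. A sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Read the key from both mount points to prove both projections work.
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
    - name: secret-volume-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
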
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:57:48.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  7 14:58:07.096: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 14:58:07.111: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 14:58:09.111: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 14:58:09.118: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 14:58:11.112: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 14:58:11.122: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 14:58:13.112: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 14:58:13.123: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:58:13.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7840" for this suite.
Jan  7 14:58:35.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:58:35.256: INFO: namespace container-lifecycle-hook-7840 deletion completed in 22.125298009s

• [SLOW TEST:46.408 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
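
The HTTP variant swaps the exec handler for an httpGet against the handler pod that [BeforeEach] started. Only the hook block changes; host, port, and path below are placeholders for wherever that handler is actually listening:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          # Placeholder target; the suite points this at its handler pod.
          host: 10.44.0.1
          port: 8080
          path: /echo?msg=poststart
          scheme: HTTP
EOF
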
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:58:35.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 14:58:35.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6901'
Jan  7 14:58:37.744: INFO: stderr: ""
Jan  7 14:58:37.745: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  7 14:58:47.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6901 -o json'
Jan  7 14:58:48.024: INFO: stderr: ""
Jan  7 14:58:48.024: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-07T14:58:37Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-6901\",\n        \"resourceVersion\": \"19665065\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6901/pods/e2e-test-nginx-pod\",\n        \"uid\": \"2408ffff-6a73-4f7b-873f-c023ae5fc7ab\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-phc6f\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-phc6f\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-phc6f\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T14:58:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T14:58:45Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T14:58:45Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T14:58:37Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://860daf3e1f12e8349482e732914051643c06f2be5cd837aff865b68cf539f7a5\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-07T14:58:44Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-07T14:58:37Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  7 14:58:48.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6901'
Jan  7 14:58:48.368: INFO: stderr: ""
Jan  7 14:58:48.369: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  7 14:58:48.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6901'
Jan  7 14:58:56.414: INFO: stderr: ""
Jan  7 14:58:56.415: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:58:56.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6901" for this suite.
Jan  7 14:59:02.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:59:02.659: INFO: namespace kubectl-6901 deletion completed in 6.225633953s

• [SLOW TEST:27.403 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
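
kubectl replace needs a complete object, which is why the test first dumps the live pod as JSON before swapping the image and piping it back. The same round-trip works as one pipeline (sed stands in for the test's in-memory edit); this is legal on a running pod because spec.containers[*].image is one of the few pod fields that may be mutated in place:

kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
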
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:59:02.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  7 14:59:02.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5" in namespace "downward-api-6353" to be "success or failure"
Jan  7 14:59:02.792: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.520533ms
Jan  7 14:59:04.802: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02699384s
Jan  7 14:59:06.815: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03989508s
Jan  7 14:59:08.832: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057479654s
Jan  7 14:59:10.848: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073244601s
STEP: Saw pod success
Jan  7 14:59:10.848: INFO: Pod "downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5" satisfied condition "success or failure"
Jan  7 14:59:10.855: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5 container client-container: 
STEP: delete the pod
Jan  7 14:59:11.009: INFO: Waiting for pod downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5 to disappear
Jan  7 14:59:11.017: INFO: Pod downwardapi-volume-774b760e-fba6-4bca-8772-738aef98dbf5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:59:11.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6353" for this suite.
Jan  7 14:59:17.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:59:17.203: INFO: namespace downward-api-6353 deletion completed in 6.177122502s

• [SLOW TEST:14.542 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
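
Unlike the projected variant earlier in this run, this spec uses the plain downwardAPI volume type, exposing the pod's own name via fieldRef. A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
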
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:59:17.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  7 14:59:17.319: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3740,SelfLink:/api/v1/namespaces/watch-3740/configmaps/e2e-watch-test-watch-closed,UID:43b220f0-8325-4724-8650-4a185ffa262b,ResourceVersion:19665158,Generation:0,CreationTimestamp:2020-01-07 14:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  7 14:59:17.320: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3740,SelfLink:/api/v1/namespaces/watch-3740/configmaps/e2e-watch-test-watch-closed,UID:43b220f0-8325-4724-8650-4a185ffa262b,ResourceVersion:19665159,Generation:0,CreationTimestamp:2020-01-07 14:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  7 14:59:17.355: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3740,SelfLink:/api/v1/namespaces/watch-3740/configmaps/e2e-watch-test-watch-closed,UID:43b220f0-8325-4724-8650-4a185ffa262b,ResourceVersion:19665160,Generation:0,CreationTimestamp:2020-01-07 14:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  7 14:59:17.355: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3740,SelfLink:/api/v1/namespaces/watch-3740/configmaps/e2e-watch-test-watch-closed,UID:43b220f0-8325-4724-8650-4a185ffa262b,ResourceVersion:19665161,Generation:0,CreationTimestamp:2020-01-07 14:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:59:17.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3740" for this suite.
Jan  7 14:59:23.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:59:23.514: INFO: namespace watch-3740 deletion completed in 6.153934432s

• [SLOW TEST:6.310 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
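
The property under test is that a watch can be resumed from the last ResourceVersion a previous watch delivered, replaying everything that happened while it was closed (here mutation 2 and the DELETED event). At the API level that is just a watch request with resourceVersion pinned, which kubectl can issue directly against the raw endpoint; the version below is the one the first watch saw at its last MODIFIED event:

# Resume watching configmaps from the version the closed watch last observed.
kubectl get --raw "/api/v1/namespaces/watch-3740/configmaps?watch=true&resourceVersion=19665159"
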
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:59:23.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:59:30.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4309" for this suite.
Jan  7 14:59:36.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:59:36.139: INFO: namespace namespaces-4309 deletion completed in 6.129684768s
STEP: Destroying namespace "nsdeletetest-2523" for this suite.
Jan  7 14:59:36.142: INFO: Namespace nsdeletetest-2523 was already deleted
STEP: Destroying namespace "nsdeletetest-3406" for this suite.
Jan  7 14:59:42.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 14:59:42.336: INFO: namespace nsdeletetest-3406 deletion completed in 6.19393972s

• [SLOW TEST:18.822 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
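
The spec's flow translates directly to kubectl: create a namespace, put a Service in it, delete and recreate the namespace, and confirm the Service did not survive. Illustrative names:

kubectl create namespace nsdeletetest
kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdeletetest
kubectl delete namespace nsdeletetest
kubectl create namespace nsdeletetest
kubectl get services --namespace=nsdeletetest   # expect "No resources found."
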
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 14:59:42.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  7 14:59:48.746: INFO: 0 pods remaining
Jan  7 14:59:48.747: INFO: 0 pods has nil DeletionTimestamp
Jan  7 14:59:48.747: INFO: 
STEP: Gathering metrics
W0107 14:59:49.524697       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 14:59:49.524: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 14:59:49.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8469" for this suite.
Jan  7 15:00:01.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:00:01.971: INFO: namespace gc-8469 deletion completed in 12.265563927s

• [SLOW TEST:19.636 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
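
"If the deleteOptions says so" refers to propagationPolicy: Foreground, which keeps the RC (with a deletionTimestamp set) until the garbage collector has removed its pods; that is what the "0 pods remaining" polling above is checking. A sketch against the raw API through kubectl proxy (the RC name is a placeholder):

kubectl proxy --port=8001 &
curl -X DELETE http://localhost:8001/api/v1/namespaces/gc-8469/replicationcontrollers/my-rc \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
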
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:00:01.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9k9vc in namespace proxy-6276
I0107 15:00:02.177257       8 runners.go:180] Created replication controller with name: proxy-service-9k9vc, namespace: proxy-6276, replica count: 1
I0107 15:00:03.228568       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:04.229430       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:05.230063       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:06.230526       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:07.230966       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:08.231413       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:09.231890       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:10.232324       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:11.232735       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 15:00:12.233375       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 15:00:13.233938       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 15:00:14.234660       8 runners.go:180] proxy-service-9k9vc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  7 15:00:14.242: INFO: setup took 12.1803628s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 24.753666ms)
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 24.687913ms)
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 25.004018ms)
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 24.857895ms)
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 24.623617ms)
Jan  7 15:00:14.267: INFO: (0) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 25.116223ms)
Jan  7 15:00:14.268: INFO: (0) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 26.161397ms)
Jan  7 15:00:14.268: INFO: (0) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 25.981339ms)
Jan  7 15:00:14.268: INFO: (0) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 26.120423ms)
Jan  7 15:00:14.268: INFO: (0) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 26.12007ms)
Jan  7 15:00:14.268: INFO: (0) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 26.100755ms)
Jan  7 15:00:14.282: INFO: (0) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: ... (200; 13.238497ms)
Jan  7 15:00:14.296: INFO: (1) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 13.555307ms)
Jan  7 15:00:14.301: INFO: (1) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 17.92529ms)
Jan  7 15:00:14.301: INFO: (1) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 18.237678ms)
Jan  7 15:00:14.301: INFO: (1) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 18.323092ms)
Jan  7 15:00:14.302: INFO: (1) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 19.506753ms)
Jan  7 15:00:14.302: INFO: (1) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 19.722597ms)
Jan  7 15:00:14.303: INFO: (1) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 19.606041ms)
Jan  7 15:00:14.303: INFO: (1) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 19.784808ms)
Jan  7 15:00:14.303: INFO: (1) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 20.086833ms)
Jan  7 15:00:14.311: INFO: (2) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 7.667102ms)
Jan  7 15:00:14.312: INFO: (2) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 9.041795ms)
Jan  7 15:00:14.313: INFO: (2) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 9.179235ms)
Jan  7 15:00:14.313: INFO: (2) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 10.225652ms)
Jan  7 15:00:14.314: INFO: (2) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 10.690299ms)
Jan  7 15:00:14.315: INFO: (2) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 11.205352ms)
Jan  7 15:00:14.315: INFO: (2) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 11.062854ms)
Jan  7 15:00:14.315: INFO: (2) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 11.459437ms)
Jan  7 15:00:14.317: INFO: (2) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 12.978446ms)
Jan  7 15:00:14.317: INFO: (2) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 14.044072ms)
Jan  7 15:00:14.318: INFO: (2) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 14.302118ms)
Jan  7 15:00:14.319: INFO: (2) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 15.910638ms)
Jan  7 15:00:14.320: INFO: (2) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 17.26082ms)
Jan  7 15:00:14.321: INFO: (2) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 17.253952ms)
Jan  7 15:00:14.328: INFO: (3) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 7.881722ms)
Jan  7 15:00:14.330: INFO: (3) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 9.710087ms)
Jan  7 15:00:14.331: INFO: (3) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 9.961465ms)
Jan  7 15:00:14.332: INFO: (3) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 12.055671ms)
Jan  7 15:00:14.333: INFO: (3) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 12.12333ms)
Jan  7 15:00:14.333: INFO: (3) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 12.158282ms)
Jan  7 15:00:14.334: INFO: (3) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 12.845482ms)
Jan  7 15:00:14.335: INFO: (3) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 14.461887ms)
Jan  7 15:00:14.336: INFO: (3) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 15.037684ms)
Jan  7 15:00:14.336: INFO: (3) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 15.177852ms)
Jan  7 15:00:14.337: INFO: (3) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 16.234646ms)
Jan  7 15:00:14.345: INFO: (4) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 7.799365ms)
Jan  7 15:00:14.345: INFO: (4) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 7.940997ms)
Jan  7 15:00:14.348: INFO: (4) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 11.001454ms)
Jan  7 15:00:14.349: INFO: (4) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 12.254602ms)
Jan  7 15:00:14.350: INFO: (4) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 12.833118ms)
Jan  7 15:00:14.350: INFO: (4) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.854743ms)
Jan  7 15:00:14.350: INFO: (4) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.918853ms)
Jan  7 15:00:14.350: INFO: (4) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 13.360253ms)
Jan  7 15:00:14.350: INFO: (4) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 13.599012ms)
Jan  7 15:00:14.351: INFO: (4) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 13.547667ms)
Jan  7 15:00:14.355: INFO: (4) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 17.602393ms)
Jan  7 15:00:14.355: INFO: (4) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 17.995406ms)
Jan  7 15:00:14.355: INFO: (4) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 18.110689ms)
Jan  7 15:00:14.355: INFO: (4) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 18.228914ms)
Jan  7 15:00:14.356: INFO: (4) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 18.91544ms)
Jan  7 15:00:14.360: INFO: (5) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 3.730551ms)
Jan  7 15:00:14.361: INFO: (5) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 5.363332ms)
Jan  7 15:00:14.362: INFO: (5) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 5.905514ms)
Jan  7 15:00:14.362: INFO: (5) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 5.921557ms)
Jan  7 15:00:14.362: INFO: (5) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 6.008408ms)
Jan  7 15:00:14.365: INFO: (5) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 8.641897ms)
Jan  7 15:00:14.365: INFO: (5) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 8.982681ms)
Jan  7 15:00:14.365: INFO: (5) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 9.226106ms)
Jan  7 15:00:14.366: INFO: (5) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 9.640658ms)
Jan  7 15:00:14.366: INFO: (5) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 9.973685ms)
Jan  7 15:00:14.366: INFO: (5) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: ... (200; 7.633557ms)
Jan  7 15:00:14.379: INFO: (6) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 7.857398ms)
Jan  7 15:00:14.379: INFO: (6) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 7.795976ms)
Jan  7 15:00:14.379: INFO: (6) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 8.216074ms)
Jan  7 15:00:14.382: INFO: (6) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 11.543611ms)
Jan  7 15:00:14.383: INFO: (6) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 11.928566ms)
Jan  7 15:00:14.383: INFO: (6) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.061676ms)
Jan  7 15:00:14.383: INFO: (6) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 12.420086ms)
Jan  7 15:00:14.383: INFO: (6) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 12.366406ms)
Jan  7 15:00:14.383: INFO: (6) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 17.52616ms)
Jan  7 15:00:14.405: INFO: (7) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 17.758668ms)
Jan  7 15:00:14.405: INFO: (7) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 18.429273ms)
Jan  7 15:00:14.405: INFO: (7) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 18.317172ms)
Jan  7 15:00:14.405: INFO: (7) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 18.530433ms)
Jan  7 15:00:14.406: INFO: (7) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 19.117201ms)
Jan  7 15:00:14.406: INFO: (7) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 19.295847ms)
Jan  7 15:00:14.406: INFO: (7) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 19.614493ms)
Jan  7 15:00:14.407: INFO: (7) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 20.123359ms)
Jan  7 15:00:14.407: INFO: (7) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 19.975707ms)
Jan  7 15:00:14.407: INFO: (7) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 20.24692ms)
Jan  7 15:00:14.407: INFO: (7) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 20.396312ms)
Jan  7 15:00:14.407: INFO: (7) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 20.724375ms)
Jan  7 15:00:14.415: INFO: (8) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 7.549759ms)
Jan  7 15:00:14.416: INFO: (8) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 7.801271ms)
Jan  7 15:00:14.416: INFO: (8) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 8.006891ms)
Jan  7 15:00:14.416: INFO: (8) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 10.511762ms)
Jan  7 15:00:14.418: INFO: (8) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 10.651173ms)
Jan  7 15:00:14.418: INFO: (8) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 10.670506ms)
Jan  7 15:00:14.420: INFO: (8) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 12.467479ms)
Jan  7 15:00:14.423: INFO: (8) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 15.699265ms)
Jan  7 15:00:14.424: INFO: (8) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 15.809624ms)
Jan  7 15:00:14.424: INFO: (8) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 15.762913ms)
Jan  7 15:00:14.424: INFO: (8) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 15.880575ms)
Jan  7 15:00:14.424: INFO: (8) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 15.911978ms)
Jan  7 15:00:14.425: INFO: (8) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 17.015799ms)
Jan  7 15:00:14.434: INFO: (9) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 9.406133ms)
Jan  7 15:00:14.434: INFO: (9) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 9.548055ms)
Jan  7 15:00:14.435: INFO: (9) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 9.652287ms)
Jan  7 15:00:14.435: INFO: (9) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 10.184946ms)
Jan  7 15:00:14.435: INFO: (9) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 10.051065ms)
Jan  7 15:00:14.435: INFO: (9) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 10.297105ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 10.875305ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 10.938837ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 11.05831ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 11.069781ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 11.048832ms)
Jan  7 15:00:14.436: INFO: (9) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 11.307897ms)
Jan  7 15:00:14.439: INFO: (9) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 14.555964ms)
Jan  7 15:00:14.439: INFO: (9) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 14.715926ms)
Jan  7 15:00:14.450: INFO: (10) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 10.870756ms)
Jan  7 15:00:14.451: INFO: (10) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 11.001448ms)
Jan  7 15:00:14.454: INFO: (10) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 14.491833ms)
Jan  7 15:00:14.454: INFO: (10) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 14.415743ms)
Jan  7 15:00:14.457: INFO: (10) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 16.733254ms)
Jan  7 15:00:14.457: INFO: (10) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 16.861133ms)
Jan  7 15:00:14.457: INFO: (10) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 11.587704ms)
Jan  7 15:00:14.486: INFO: (11) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 11.94182ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.250085ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 12.797548ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 12.939364ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 13.203405ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 13.069304ms)
Jan  7 15:00:14.487: INFO: (11) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 17.602871ms)
Jan  7 15:00:14.508: INFO: (12) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 17.711706ms)
Jan  7 15:00:14.508: INFO: (12) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 17.695775ms)
Jan  7 15:00:14.508: INFO: (12) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 18.414741ms)
Jan  7 15:00:14.509: INFO: (12) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 18.408162ms)
Jan  7 15:00:14.509: INFO: (12) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 18.513614ms)
Jan  7 15:00:14.511: INFO: (12) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 20.906955ms)
Jan  7 15:00:14.511: INFO: (12) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 21.432843ms)
Jan  7 15:00:14.511: INFO: (12) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 21.47718ms)
Jan  7 15:00:14.512: INFO: (12) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 21.285585ms)
Jan  7 15:00:14.512: INFO: (12) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 21.582606ms)
Jan  7 15:00:14.512: INFO: (12) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 19.313989ms)
Jan  7 15:00:14.534: INFO: (13) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 20.784134ms)
Jan  7 15:00:14.536: INFO: (13) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 21.85226ms)
Jan  7 15:00:14.556: INFO: (14) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 18.717595ms)
Jan  7 15:00:14.557: INFO: (14) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 20.411587ms)
Jan  7 15:00:14.558: INFO: (14) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 20.428396ms)
Jan  7 15:00:14.558: INFO: (14) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 20.823449ms)
Jan  7 15:00:14.559: INFO: (14) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 22.269513ms)
Jan  7 15:00:14.566: INFO: (14) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 28.438074ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 29.939631ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test (200; 29.941554ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 30.778745ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 29.950507ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 30.118638ms)
Jan  7 15:00:14.567: INFO: (14) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 29.964743ms)
Jan  7 15:00:14.576: INFO: (15) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 8.468871ms)
Jan  7 15:00:14.580: INFO: (15) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.601321ms)
Jan  7 15:00:14.580: INFO: (15) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.730267ms)
Jan  7 15:00:14.580: INFO: (15) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 13.163191ms)
Jan  7 15:00:14.580: INFO: (15) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 12.888732ms)
Jan  7 15:00:14.580: INFO: (15) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 13.258033ms)
Jan  7 15:00:14.583: INFO: (15) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 15.388581ms)
Jan  7 15:00:14.583: INFO: (15) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: ... (200; 16.319363ms)
Jan  7 15:00:14.584: INFO: (15) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 16.732028ms)
Jan  7 15:00:14.584: INFO: (15) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 16.679434ms)
Jan  7 15:00:14.585: INFO: (15) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 17.733567ms)
Jan  7 15:00:14.585: INFO: (15) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 18.078401ms)
Jan  7 15:00:14.585: INFO: (15) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 17.956311ms)
Jan  7 15:00:14.586: INFO: (15) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 18.392196ms)
Jan  7 15:00:14.586: INFO: (15) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 18.822007ms)
Jan  7 15:00:14.591: INFO: (16) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 4.821068ms)
Jan  7 15:00:14.592: INFO: (16) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 5.411853ms)
Jan  7 15:00:14.596: INFO: (16) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 8.897336ms)
Jan  7 15:00:14.600: INFO: (16) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 12.802272ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 13.16022ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 13.435757ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 13.345042ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 13.890317ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 14.086482ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 13.904121ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 14.033437ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 14.247083ms)
Jan  7 15:00:14.601: INFO: (16) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 9.647018ms)
Jan  7 15:00:14.612: INFO: (17) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: ... (200; 10.399247ms)
Jan  7 15:00:14.612: INFO: (17) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 10.43171ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 11.394361ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 11.582017ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 11.857242ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 11.784211ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 11.786505ms)
Jan  7 15:00:14.614: INFO: (17) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 12.564823ms)
Jan  7 15:00:14.628: INFO: (18) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 13.07819ms)
Jan  7 15:00:14.628: INFO: (18) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 13.855499ms)
Jan  7 15:00:14.629: INFO: (18) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 13.85068ms)
Jan  7 15:00:14.629: INFO: (18) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 14.295134ms)
Jan  7 15:00:14.629: INFO: (18) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 14.481103ms)
Jan  7 15:00:14.629: INFO: (18) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 14.564849ms)
Jan  7 15:00:14.630: INFO: (18) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 15.075306ms)
Jan  7 15:00:14.630: INFO: (18) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 15.364515ms)
Jan  7 15:00:14.630: INFO: (18) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: test<... (200; 17.864632ms)
Jan  7 15:00:14.633: INFO: (18) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 18.012205ms)
Jan  7 15:00:14.638: INFO: (19) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 5.607808ms)
Jan  7 15:00:14.640: INFO: (19) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 7.467276ms)
Jan  7 15:00:14.641: INFO: (19) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:462/proxy/: tls qux (200; 7.680015ms)
Jan  7 15:00:14.641: INFO: (19) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:1080/proxy/: test<... (200; 8.473587ms)
Jan  7 15:00:14.641: INFO: (19) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:443/proxy/: ... (200; ...)
Jan  7 15:00:14.641: INFO: (19) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88/proxy/: test (200; 8.692774ms)
Jan  7 15:00:14.642: INFO: (19) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:160/proxy/: foo (200; 8.773575ms)
Jan  7 15:00:14.642: INFO: (19) /api/v1/namespaces/proxy-6276/pods/http:proxy-service-9k9vc-5cg88:1080/proxy/: ... (200; 8.921678ms)
Jan  7 15:00:14.642: INFO: (19) /api/v1/namespaces/proxy-6276/pods/https:proxy-service-9k9vc-5cg88:460/proxy/: tls baz (200; 9.315199ms)
Jan  7 15:00:14.646: INFO: (19) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname1/proxy/: foo (200; 13.188109ms)
Jan  7 15:00:14.646: INFO: (19) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname1/proxy/: tls baz (200; 13.351166ms)
Jan  7 15:00:14.646: INFO: (19) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname1/proxy/: foo (200; 13.454542ms)
Jan  7 15:00:14.646: INFO: (19) /api/v1/namespaces/proxy-6276/services/proxy-service-9k9vc:portname2/proxy/: bar (200; 13.558333ms)
Jan  7 15:00:14.647: INFO: (19) /api/v1/namespaces/proxy-6276/services/http:proxy-service-9k9vc:portname2/proxy/: bar (200; 13.671215ms)
Jan  7 15:00:14.647: INFO: (19) /api/v1/namespaces/proxy-6276/services/https:proxy-service-9k9vc:tlsportname2/proxy/: tls qux (200; 13.851363ms)
Jan  7 15:00:14.648: INFO: (19) /api/v1/namespaces/proxy-6276/pods/proxy-service-9k9vc-5cg88:162/proxy/: bar (200; 14.854004ms)
STEP: deleting ReplicationController proxy-service-9k9vc in namespace proxy-6276, will wait for the garbage collector to delete the pods
Jan  7 15:00:14.708: INFO: Deleting ReplicationController proxy-service-9k9vc took: 7.646803ms
Jan  7 15:00:15.009: INFO: Terminating ReplicationController proxy-service-9k9vc pods took: 300.831588ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:00:26.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6276" for this suite.
Jan  7 15:00:32.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:00:32.818: INFO: namespace proxy-6276 deletion completed in 6.158790172s

• [SLOW TEST:30.846 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
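Note: the timing lines above exercise every apiserver proxy path variant against the same backends: named service ports (portname1/portname2 over HTTP, tlsportname1/tlsportname2 over TLS) and pod ports addressed directly (160 answers "foo", 162 "bar", 460 "tls baz", 462 "tls qux"). A minimal sketch of the kind of Service those /proxy/ URLs imply — the service name, port names, and target ports are taken from the log; the selector and the service port numbers are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-9k9vc
spec:
  selector:
    app: proxy-service-9k9vc   # assumed; the log does not print the RC's labels
  ports:
  - name: portname1
    port: 80                   # assumed service port
    targetPort: 160            # pod port that answers "foo" above
  - name: portname2
    port: 81                   # assumed
    targetPort: 162            # answers "bar"
  - name: tlsportname1
    port: 443                  # assumed
    targetPort: 460            # TLS pod port that answers "tls baz"
  - name: tlsportname2
    port: 444                  # assumed
    targetPort: 462            # TLS pod port that answers "tls qux"

With such a Service in place, GET /api/v1/namespaces/<ns>/services/proxy-service-9k9vc:portname1/proxy/ makes the apiserver forward the request to port 160 of a backing pod — exactly the round trip each "(n)" line above measures.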
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:00:32.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-3180
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3180
STEP: Deleting pre-stop pod
Jan  7 15:00:54.081: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:00:54.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3180" for this suite.
Jan  7 15:01:40.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:01:40.352: INFO: namespace prestop-3180 deletion completed in 46.223469359s

• [SLOW TEST:67.534 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
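The test above stands up a "server" pod, then a "tester" pod whose preStop hook phones home to the server while the pod is being deleted; the "Received": {"prestop": 1} field in the JSON blob the server reports is the evidence that the hook actually ran. A minimal sketch of such a hook — the pod name, image, and URL here are hypothetical stand-ins, not the images the e2e framework uses:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                        # hypothetical name
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container after deletion is requested and
          # before SIGTERM reaches the main process; the kubelet waits
          # for this command (up to the grace period) before killing.
          command: ["sh", "-c", "wget -q -O- http://server.example.invalid/prestop || true"]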
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:01:40.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  7 15:01:40.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9956 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  7 15:01:49.356: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0107 15:01:47.832819    3409 log.go:172] (0xc0008b06e0) (0xc0003fafa0) Create stream\nI0107 15:01:47.833316    3409 log.go:172] (0xc0008b06e0) (0xc0003fafa0) Stream added, broadcasting: 1\nI0107 15:01:47.843089    3409 log.go:172] (0xc0008b06e0) Reply frame received for 1\nI0107 15:01:47.843193    3409 log.go:172] (0xc0008b06e0) (0xc0003fb040) Create stream\nI0107 15:01:47.843208    3409 log.go:172] (0xc0008b06e0) (0xc0003fb040) Stream added, broadcasting: 3\nI0107 15:01:47.845888    3409 log.go:172] (0xc0008b06e0) Reply frame received for 3\nI0107 15:01:47.846109    3409 log.go:172] (0xc0008b06e0) (0xc0005dc3c0) Create stream\nI0107 15:01:47.846130    3409 log.go:172] (0xc0008b06e0) (0xc0005dc3c0) Stream added, broadcasting: 5\nI0107 15:01:47.852436    3409 log.go:172] (0xc0008b06e0) Reply frame received for 5\nI0107 15:01:47.852487    3409 log.go:172] (0xc0008b06e0) (0xc0005dc460) Create stream\nI0107 15:01:47.852496    3409 log.go:172] (0xc0008b06e0) (0xc0005dc460) Stream added, broadcasting: 7\nI0107 15:01:47.860331    3409 log.go:172] (0xc0008b06e0) Reply frame received for 7\nI0107 15:01:47.861587    3409 log.go:172] (0xc0003fb040) (3) Writing data frame\nI0107 15:01:47.861793    3409 log.go:172] (0xc0003fb040) (3) Writing data frame\nI0107 15:01:47.916500    3409 log.go:172] (0xc0008b06e0) Data frame received for 5\nI0107 15:01:47.917737    3409 log.go:172] (0xc0005dc3c0) (5) Data frame handling\nI0107 15:01:47.917899    3409 log.go:172] (0xc0005dc3c0) (5) Data frame sent\nI0107 15:01:47.917952    3409 log.go:172] (0xc0008b06e0) Data frame received for 5\nI0107 15:01:47.917983    3409 log.go:172] (0xc0005dc3c0) (5) Data frame handling\nI0107 15:01:47.918264    3409 log.go:172] (0xc0005dc3c0) (5) Data frame sent\nI0107 15:01:49.316530    3409 log.go:172] (0xc0008b06e0) Data frame received for 1\nI0107 15:01:49.316656    3409 log.go:172] (0xc0008b06e0) (0xc0003fb040) Stream removed, broadcasting: 3\nI0107 15:01:49.316803    3409 log.go:172] (0xc0003fafa0) (1) Data frame handling\nI0107 15:01:49.316831    3409 log.go:172] (0xc0003fafa0) (1) Data frame sent\nI0107 15:01:49.316888    3409 log.go:172] (0xc0008b06e0) (0xc0003fafa0) Stream removed, broadcasting: 1\nI0107 15:01:49.317554    3409 log.go:172] (0xc0008b06e0) (0xc0005dc3c0) Stream removed, broadcasting: 5\nI0107 15:01:49.317914    3409 log.go:172] (0xc0008b06e0) (0xc0005dc460) Stream removed, broadcasting: 7\nI0107 15:01:49.318224    3409 log.go:172] (0xc0008b06e0) Go away received\nI0107 15:01:49.318821    3409 log.go:172] (0xc0008b06e0) (0xc0003fafa0) Stream removed, broadcasting: 1\nI0107 15:01:49.318899    3409 log.go:172] (0xc0008b06e0) (0xc0003fb040) Stream removed, broadcasting: 3\nI0107 15:01:49.318960    3409 log.go:172] (0xc0008b06e0) (0xc0005dc3c0) Stream removed, broadcasting: 5\nI0107 15:01:49.318995    3409 log.go:172] (0xc0008b06e0) (0xc0005dc460) Stream removed, broadcasting: 7\n"
Jan  7 15:01:49.356: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:01:51.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9956" for this suite.
Jan  7 15:01:57.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:01:57.524: INFO: namespace kubectl-9956 deletion completed in 6.146851681s

• [SLOW TEST:17.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
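The stderr above notes that --generator=job/v1 is deprecated; what the command creates under the hood is an ordinary batch/v1 Job. A rough manifest equivalent — the resource name, container image, restart policy, and command come from the logged kubectl invocation, while the attach/stdin plumbing of kubectl run has no manifest equivalent and is only approximated here with the stdin field:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure              # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                         # keep stdin open so cat has something to read

kubectl run --rm then attaches, pipes "abcd1234" in, closes stdin, and deletes the Job once the attached session ends — which is why the captured stdout reads "abcd1234stdin closed" followed by job.batch "e2e-test-rm-busybox-job" deleted.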
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:01:57.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-0686cd55-cc05-4f9e-8174-548328a3c982 in namespace container-probe-7669
Jan  7 15:02:07.631: INFO: Started pod test-webserver-0686cd55-cc05-4f9e-8174-548328a3c982 in namespace container-probe-7669
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 15:02:07.637: INFO: Initial restart count of pod test-webserver-0686cd55-cc05-4f9e-8174-548328a3c982 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:06:09.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7669" for this suite.
Jan  7 15:06:15.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:06:15.338: INFO: namespace container-probe-7669 deletion completed in 6.185098105s

• [SLOW TEST:257.814 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
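This probe test is the inverse of the restart cases: the pod keeps serving /healthz successfully for the whole four-minute observation window, so restartCount never moves off its initial 0. A sketch of the probe shape being exercised — only the /healthz path and the test-webserver name prefix appear in the log; the image, port, and timings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver              # the e2e pod name adds a generated UUID suffix
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed
    livenessProbe:
      httpGet:
        path: /healthz              # kubelet only restarts the container if this fails
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3

If the probe failed failureThreshold times in a row, the kubelet would kill and restart the container and restartCount would climb — exactly what the test verifies does not happen.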
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:06:15.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  7 15:06:15.540: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.9524ms)
Jan  7 15:06:15.547: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.120744ms)
Jan  7 15:06:15.553: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.962089ms)
Jan  7 15:06:15.562: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.865486ms)
Jan  7 15:06:15.571: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.27014ms)
Jan  7 15:06:15.579: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.975655ms)
Jan  7 15:06:15.587: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.882775ms)
Jan  7 15:06:15.594: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.355499ms)
Jan  7 15:06:15.603: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.427883ms)
Jan  7 15:06:15.651: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 48.096742ms)
Jan  7 15:06:15.661: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.912093ms)
Jan  7 15:06:15.671: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.365978ms)
Jan  7 15:06:15.685: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.816023ms)
Jan  7 15:06:15.692: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.935772ms)
Jan  7 15:06:15.704: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.811557ms)
Jan  7 15:06:15.711: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.609175ms)
Jan  7 15:06:15.718: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.69071ms)
Jan  7 15:06:15.725: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.610416ms)
Jan  7 15:06:15.734: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.846276ms)
Jan  7 15:06:15.744: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.657618ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:06:15.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4900" for this suite.
Jan  7 15:06:21.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:06:21.952: INFO: namespace proxy-4900 deletion completed in 6.200326831s

• [SLOW TEST:6.614 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:06:21.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d15db8de-1b9b-43cd-89f9-e202e0803269
STEP: Creating a pod to test consume configMaps
Jan  7 15:06:22.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe" in namespace "projected-5677" to be "success or failure"
Jan  7 15:06:22.072: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355744ms
Jan  7 15:06:24.083: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018483053s
Jan  7 15:06:26.114: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049429242s
Jan  7 15:06:28.490: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425768217s
Jan  7 15:06:30.503: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.438866207s
STEP: Saw pod success
Jan  7 15:06:30.503: INFO: Pod "pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe" satisfied condition "success or failure"
Jan  7 15:06:30.508: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 15:06:30.591: INFO: Waiting for pod pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe to disappear
Jan  7 15:06:30.649: INFO: Pod pod-projected-configmaps-5dda0696-704a-4667-8b3b-a02296decdbe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:06:30.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5677" for this suite.
Jan  7 15:06:36.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:06:36.838: INFO: namespace projected-5677 deletion completed in 6.174844869s

• [SLOW TEST:14.886 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
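The projected-ConfigMap test above mounts a ConfigMap through a projected volume, remaps the key to a new path (the "mappings" in the test name), and reads the file back as a non-root user. A sketch under assumed names, key, and value — the log only shows the generated ConfigMap and pod names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # generated suffix omitted
data:
  data-1: value-1                             # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps              # generated suffix omitted
spec:
  securityContext:
    runAsUser: 1000                           # the non-root part of the test name
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29     # assumed image
    # Print the remapped file and exit; the pod reaching Succeeded is
    # what the "success or failure" wait above keys on.
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1              # the key-to-path mapping under test
  restartPolicy: Never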
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:06:36.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  7 15:06:36.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7296'
Jan  7 15:06:37.426: INFO: stderr: ""
Jan  7 15:06:37.427: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 15:06:37.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:37.669: INFO: stderr: ""
Jan  7 15:06:37.670: INFO: stdout: "update-demo-nautilus-5nh4p update-demo-nautilus-p9xmj "
Jan  7 15:06:37.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nh4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:37.881: INFO: stderr: ""
Jan  7 15:06:37.881: INFO: stdout: ""
Jan  7 15:06:37.881: INFO: update-demo-nautilus-5nh4p is created but not running
Jan  7 15:06:42.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:43.429: INFO: stderr: ""
Jan  7 15:06:43.429: INFO: stdout: "update-demo-nautilus-5nh4p update-demo-nautilus-p9xmj "
Jan  7 15:06:43.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nh4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:44.086: INFO: stderr: ""
Jan  7 15:06:44.086: INFO: stdout: ""
Jan  7 15:06:44.086: INFO: update-demo-nautilus-5nh4p is created but not running
Jan  7 15:06:49.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:49.228: INFO: stderr: ""
Jan  7 15:06:49.228: INFO: stdout: "update-demo-nautilus-5nh4p update-demo-nautilus-p9xmj "
Jan  7 15:06:49.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nh4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:49.388: INFO: stderr: ""
Jan  7 15:06:49.388: INFO: stdout: "true"
Jan  7 15:06:49.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nh4p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:49.510: INFO: stderr: ""
Jan  7 15:06:49.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:06:49.510: INFO: validating pod update-demo-nautilus-5nh4p
Jan  7 15:06:49.536: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:06:49.536: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:06:49.536: INFO: update-demo-nautilus-5nh4p is verified up and running
Jan  7 15:06:49.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:49.684: INFO: stderr: ""
Jan  7 15:06:49.684: INFO: stdout: "true"
Jan  7 15:06:49.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:49.772: INFO: stderr: ""
Jan  7 15:06:49.772: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:06:49.772: INFO: validating pod update-demo-nautilus-p9xmj
Jan  7 15:06:49.791: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:06:49.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:06:49.791: INFO: update-demo-nautilus-p9xmj is verified up and running
STEP: scaling down the replication controller
Jan  7 15:06:49.794: INFO: scanned /root for discovery docs: 
Jan  7 15:06:49.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7296'
Jan  7 15:06:50.977: INFO: stderr: ""
Jan  7 15:06:50.977: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 15:06:50.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:51.110: INFO: stderr: ""
Jan  7 15:06:51.110: INFO: stdout: "update-demo-nautilus-5nh4p update-demo-nautilus-p9xmj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  7 15:06:56.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:56.328: INFO: stderr: ""
Jan  7 15:06:56.328: INFO: stdout: "update-demo-nautilus-p9xmj "
Jan  7 15:06:56.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:56.498: INFO: stderr: ""
Jan  7 15:06:56.498: INFO: stdout: "true"
Jan  7 15:06:56.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:56.696: INFO: stderr: ""
Jan  7 15:06:56.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:06:56.697: INFO: validating pod update-demo-nautilus-p9xmj
Jan  7 15:06:56.721: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:06:56.722: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:06:56.722: INFO: update-demo-nautilus-p9xmj is verified up and running
STEP: scaling up the replication controller
Jan  7 15:06:56.724: INFO: scanned /root for discovery docs: 
Jan  7 15:06:56.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7296'
Jan  7 15:06:58.051: INFO: stderr: ""
Jan  7 15:06:58.051: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 15:06:58.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:06:58.331: INFO: stderr: ""
Jan  7 15:06:58.331: INFO: stdout: "update-demo-nautilus-p9xmj update-demo-nautilus-wwpwr "
Jan  7 15:06:58.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:58.496: INFO: stderr: ""
Jan  7 15:06:58.497: INFO: stdout: "true"
Jan  7 15:06:58.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:58.612: INFO: stderr: ""
Jan  7 15:06:58.613: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:06:58.613: INFO: validating pod update-demo-nautilus-p9xmj
Jan  7 15:06:58.619: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:06:58.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:06:58.619: INFO: update-demo-nautilus-p9xmj is verified up and running
Jan  7 15:06:58.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwpwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:06:58.767: INFO: stderr: ""
Jan  7 15:06:58.767: INFO: stdout: ""
Jan  7 15:06:58.767: INFO: update-demo-nautilus-wwpwr is created but not running
Jan  7 15:07:03.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:07:03.922: INFO: stderr: ""
Jan  7 15:07:03.923: INFO: stdout: "update-demo-nautilus-p9xmj update-demo-nautilus-wwpwr "
Jan  7 15:07:03.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:04.047: INFO: stderr: ""
Jan  7 15:07:04.047: INFO: stdout: "true"
Jan  7 15:07:04.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:04.199: INFO: stderr: ""
Jan  7 15:07:04.199: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:07:04.199: INFO: validating pod update-demo-nautilus-p9xmj
Jan  7 15:07:04.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:07:04.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:07:04.206: INFO: update-demo-nautilus-p9xmj is verified up and running
Jan  7 15:07:04.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwpwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:04.339: INFO: stderr: ""
Jan  7 15:07:04.339: INFO: stdout: ""
Jan  7 15:07:04.339: INFO: update-demo-nautilus-wwpwr is created but not running
Jan  7 15:07:09.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7296'
Jan  7 15:07:09.489: INFO: stderr: ""
Jan  7 15:07:09.489: INFO: stdout: "update-demo-nautilus-p9xmj update-demo-nautilus-wwpwr "
Jan  7 15:07:09.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:09.590: INFO: stderr: ""
Jan  7 15:07:09.590: INFO: stdout: "true"
Jan  7 15:07:09.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:09.690: INFO: stderr: ""
Jan  7 15:07:09.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:07:09.690: INFO: validating pod update-demo-nautilus-p9xmj
Jan  7 15:07:09.694: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:07:09.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:07:09.694: INFO: update-demo-nautilus-p9xmj is verified up and running
Jan  7 15:07:09.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwpwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:09.813: INFO: stderr: ""
Jan  7 15:07:09.813: INFO: stdout: "true"
Jan  7 15:07:09.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwpwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7296'
Jan  7 15:07:09.953: INFO: stderr: ""
Jan  7 15:07:09.953: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 15:07:09.953: INFO: validating pod update-demo-nautilus-wwpwr
Jan  7 15:07:09.969: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 15:07:09.969: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 15:07:09.969: INFO: update-demo-nautilus-wwpwr is verified up and running
STEP: using delete to clean up resources
Jan  7 15:07:09.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7296'
Jan  7 15:07:10.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:07:10.059: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  7 15:07:10.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7296'
Jan  7 15:07:10.192: INFO: stderr: "No resources found.\n"
Jan  7 15:07:10.192: INFO: stdout: ""
Jan  7 15:07:10.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7296 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  7 15:07:10.406: INFO: stderr: ""
Jan  7 15:07:10.406: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:07:10.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7296" for this suite.
Jan  7 15:07:32.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:07:32.544: INFO: namespace kubectl-7296 deletion completed in 22.130983728s

• [SLOW TEST:55.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
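(The image check driving the verification above is an ordinary kubectl template query and can be reproduced by hand against a live replica; the pod name and namespace below are the ones from this run and would differ elsewhere.)

kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9xmj \
  --namespace=kubectl-7296 -o template \
  --template='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'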
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:07:32.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  7 15:07:32.642: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  7 15:07:32.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:33.178: INFO: stderr: ""
Jan  7 15:07:33.178: INFO: stdout: "service/redis-slave created\n"
Jan  7 15:07:33.179: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  7 15:07:33.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:33.486: INFO: stderr: ""
Jan  7 15:07:33.486: INFO: stdout: "service/redis-master created\n"
Jan  7 15:07:33.487: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  7 15:07:33.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:34.005: INFO: stderr: ""
Jan  7 15:07:34.005: INFO: stdout: "service/frontend created\n"
Jan  7 15:07:34.007: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a DNS service, then to
          # find the service host info from environment variables instead,
          # comment out the 'value: dns' line above and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  7 15:07:34.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:34.303: INFO: stderr: ""
Jan  7 15:07:34.304: INFO: stdout: "deployment.apps/frontend created\n"
Jan  7 15:07:34.304: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  7 15:07:34.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:34.940: INFO: stderr: ""
Jan  7 15:07:34.940: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  7 15:07:34.941: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a DNS service, then to
          # find the master service's host from an environment variable
          # instead, comment out the 'value: dns' line above and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  7 15:07:34.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1330'
Jan  7 15:07:35.996: INFO: stderr: ""
Jan  7 15:07:35.996: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan  7 15:07:35.996: INFO: Waiting for all frontend pods to be Running.
Jan  7 15:08:01.050: INFO: Waiting for frontend to serve content.
Jan  7 15:08:01.581: INFO: Trying to add a new entry to the guestbook.
Jan  7 15:08:01.652: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  7 15:08:01.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:02.148: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:02.149: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 15:08:02.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:02.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:02.413: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 15:08:02.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:02.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:02.560: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 15:08:02.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:02.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:02.727: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 15:08:02.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:02.866: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:02.866: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 15:08:02.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1330'
Jan  7 15:08:03.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 15:08:03.161: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:08:03.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1330" for this suite.
Jan  7 15:08:49.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:08:49.416: INFO: namespace kubectl-1330 deletion completed in 46.15693175s

• [SLOW TEST:76.872 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
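(Each guestbook component above is created by piping its manifest to kubectl on stdin; done by hand, the redis-master Service from this run looks like the following, with the namespace substituted for your own.)

kubectl create -f - --namespace=kubectl-1330 <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
EOF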
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:08:49.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0107 15:09:32.152433       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 15:09:32.152: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:09:32.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5709" for this suite.
Jan  7 15:09:40.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:09:42.096: INFO: namespace gc-5709 deletion completed in 9.938421105s

• [SLOW TEST:52.679 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
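(The delete options this spec exercises correspond to kubectl's non-cascading delete: the replication controller goes away but its pods are left running. A sketch with a hypothetical rc name and label; on v1.15 the flag is the boolean --cascade=false.)

kubectl delete rc simpletest-rc --cascade=false   # orphan the rc's pods
kubectl get pods -l name=simpletest-rc            # hypothetical label: pods still Running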
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:09:42.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-028bfa28-d9b4-4dcb-b6c6-2baea09e8e75
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-028bfa28-d9b4-4dcb-b6c6-2baea09e8e75
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:11:22.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4768" for this suite.
Jan  7 15:11:44.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:11:44.814: INFO: namespace configmap-4768 deletion completed in 22.149239281s

• [SLOW TEST:122.718 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
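(The scenario above can be reproduced with a ConfigMap-backed volume: the kubelet syncs the mounted file some time after the ConfigMap changes. All names and the busybox image below are hypothetical stand-ins.)

kubectl create configmap test-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: test-upd
EOF
# Update the ConfigMap; the file under /etc/cm follows after the kubelet sync.
kubectl create configmap test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -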
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:11:44.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  7 15:11:44.942: INFO: Waiting up to 5m0s for pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db" in namespace "containers-3716" to be "success or failure"
Jan  7 15:11:44.975: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db": Phase="Pending", Reason="", readiness=false. Elapsed: 32.186946ms
Jan  7 15:11:46.983: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040350396s
Jan  7 15:11:48.991: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048221303s
Jan  7 15:11:51.002: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059966668s
Jan  7 15:11:53.013: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07010191s
STEP: Saw pod success
Jan  7 15:11:53.013: INFO: Pod "client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db" satisfied condition "success or failure"
Jan  7 15:11:53.017: INFO: Trying to get logs from node iruya-node pod client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db container test-container: 
STEP: delete the pod
Jan  7 15:11:53.131: INFO: Waiting for pod client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db to disappear
Jan  7 15:11:53.140: INFO: Pod client-containers-b33db4bd-53c6-48af-ae3d-3c02a490b5db no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:11:53.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3716" for this suite.
Jan  7 15:11:59.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:11:59.357: INFO: namespace containers-3716 deletion completed in 6.209413401s

• [SLOW TEST:14.542 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
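(This spec, and the two later Docker Containers specs for arguments, all come down to the Pod-level command/args fields: command overrides the image's ENTRYPOINT, args overrides its CMD. A minimal sketch with hypothetical names:)

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["echo"]          # replaces the image ENTRYPOINT
    args: ["hello", "world"]   # replaces the image CMD
EOF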
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:11:59.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e069e7c8-29ec-4468-996c-856402fada37 in namespace container-probe-7872
Jan  7 15:12:07.446: INFO: Started pod liveness-e069e7c8-29ec-4468-996c-856402fada37 in namespace container-probe-7872
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 15:12:07.450: INFO: Initial restart count of pod liveness-e069e7c8-29ec-4468-996c-856402fada37 is 0
Jan  7 15:12:23.536: INFO: Restart count of pod container-probe-7872/liveness-e069e7c8-29ec-4468-996c-856402fada37 is now 1 (16.086519487s elapsed)
Jan  7 15:12:43.661: INFO: Restart count of pod container-probe-7872/liveness-e069e7c8-29ec-4468-996c-856402fada37 is now 2 (36.210985844s elapsed)
Jan  7 15:13:03.749: INFO: Restart count of pod container-probe-7872/liveness-e069e7c8-29ec-4468-996c-856402fada37 is now 3 (56.298900913s elapsed)
Jan  7 15:13:23.903: INFO: Restart count of pod container-probe-7872/liveness-e069e7c8-29ec-4468-996c-856402fada37 is now 4 (1m16.452602712s elapsed)
Jan  7 15:14:30.419: INFO: Restart count of pod container-probe-7872/liveness-e069e7c8-29ec-4468-996c-856402fada37 is now 5 (2m22.968759638s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:14:30.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7872" for this suite.
Jan  7 15:14:36.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:14:36.673: INFO: namespace container-probe-7872 deletion completed in 6.212500759s

• [SLOW TEST:157.316 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
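(The restart counts above are driven by a failing liveness probe: each failure makes the kubelet restart the container and bump restartCount. A sketch with hypothetical names; the probed file never exists, so the probe always fails.)

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'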
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:14:36.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:15:08.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3541" for this suite.
Jan  7 15:15:15.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:15:15.211: INFO: namespace namespaces-3541 deletion completed in 6.213795347s
STEP: Destroying namespace "nsdeletetest-2847" for this suite.
Jan  7 15:15:15.214: INFO: Namespace nsdeletetest-2847 was already deleted
STEP: Destroying namespace "nsdeletetest-4911" for this suite.
Jan  7 15:15:21.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:15:21.359: INFO: namespace nsdeletetest-4911 deletion completed in 6.14443479s

• [SLOW TEST:44.685 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
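(The guarantee this spec checks is observable with plain kubectl: deleting a namespace removes everything in it, and listing afterwards comes back empty. Names below are hypothetical.)

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo
kubectl get pods -n nsdelete-demo   # after deletion completes: No resources found.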
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:15:21.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  7 15:15:21.477: INFO: Waiting up to 5m0s for pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513" in namespace "containers-2927" to be "success or failure"
Jan  7 15:15:21.484: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Pending", Reason="", readiness=false. Elapsed: 7.404248ms
Jan  7 15:15:23.499: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02201144s
Jan  7 15:15:25.511: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033924748s
Jan  7 15:15:27.518: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041736785s
Jan  7 15:15:29.526: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04928977s
Jan  7 15:15:31.537: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060198693s
STEP: Saw pod success
Jan  7 15:15:31.537: INFO: Pod "client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513" satisfied condition "success or failure"
Jan  7 15:15:31.543: INFO: Trying to get logs from node iruya-node pod client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513 container test-container: 
STEP: delete the pod
Jan  7 15:15:31.672: INFO: Waiting for pod client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513 to disappear
Jan  7 15:15:31.721: INFO: Pod client-containers-648c6279-aaa3-4fad-8782-ce186a7ae513 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:15:31.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2927" for this suite.
Jan  7 15:15:37.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:15:37.916: INFO: namespace containers-2927 deletion completed in 6.187088868s

• [SLOW TEST:16.557 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:15:37.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  7 15:15:48.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-060c6e7a-e054-4352-8064-98edd204a723 -c busybox-main-container --namespace=emptydir-7591 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  7 15:15:50.857: INFO: stderr: "I0107 15:15:50.390712    4294 log.go:172] (0xc0004860b0) (0xc000aca0a0) Create stream\nI0107 15:15:50.390800    4294 log.go:172] (0xc0004860b0) (0xc000aca0a0) Stream added, broadcasting: 1\nI0107 15:15:50.400526    4294 log.go:172] (0xc0004860b0) Reply frame received for 1\nI0107 15:15:50.400596    4294 log.go:172] (0xc0004860b0) (0xc00068e3c0) Create stream\nI0107 15:15:50.400609    4294 log.go:172] (0xc0004860b0) (0xc00068e3c0) Stream added, broadcasting: 3\nI0107 15:15:50.402661    4294 log.go:172] (0xc0004860b0) Reply frame received for 3\nI0107 15:15:50.402683    4294 log.go:172] (0xc0004860b0) (0xc00078a0a0) Create stream\nI0107 15:15:50.402693    4294 log.go:172] (0xc0004860b0) (0xc00078a0a0) Stream added, broadcasting: 5\nI0107 15:15:50.404405    4294 log.go:172] (0xc0004860b0) Reply frame received for 5\nI0107 15:15:50.643230    4294 log.go:172] (0xc0004860b0) Data frame received for 3\nI0107 15:15:50.643535    4294 log.go:172] (0xc00068e3c0) (3) Data frame handling\nI0107 15:15:50.643667    4294 log.go:172] (0xc00068e3c0) (3) Data frame sent\nI0107 15:15:50.829091    4294 log.go:172] (0xc0004860b0) Data frame received for 1\nI0107 15:15:50.829218    4294 log.go:172] (0xc000aca0a0) (1) Data frame handling\nI0107 15:15:50.829252    4294 log.go:172] (0xc000aca0a0) (1) Data frame sent\nI0107 15:15:50.829928    4294 log.go:172] (0xc0004860b0) (0xc000aca0a0) Stream removed, broadcasting: 1\nI0107 15:15:50.833860    4294 log.go:172] (0xc0004860b0) (0xc00068e3c0) Stream removed, broadcasting: 3\nI0107 15:15:50.834961    4294 log.go:172] (0xc0004860b0) (0xc00078a0a0) Stream removed, broadcasting: 5\nI0107 15:15:50.835170    4294 log.go:172] (0xc0004860b0) Go away received\nI0107 15:15:50.835634    4294 log.go:172] (0xc0004860b0) (0xc000aca0a0) Stream removed, broadcasting: 1\nI0107 15:15:50.836303    4294 log.go:172] (0xc0004860b0) (0xc00068e3c0) Stream removed, broadcasting: 3\nI0107 15:15:50.836342    4294 log.go:172] (0xc0004860b0) (0xc00078a0a0) Stream removed, broadcasting: 5\n"
Jan  7 15:15:50.858: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:15:50.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7591" for this suite.
Jan  7 15:15:56.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:15:57.056: INFO: namespace emptydir-7591 deletion completed in 6.182494137s

• [SLOW TEST:19.140 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
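(The shared-volume setup above pairs two containers on one emptyDir: whatever one writes, the other can read. A minimal sketch with hypothetical names, mirroring the exec-and-cat check from this run.)

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
  - name: share
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the writer' > /data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: share
      mountPath: /data
EOF
kubectl exec shared-demo -c reader -- cat /data/shareddata.txt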
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  7 15:15:57.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  7 15:15:57.165: INFO: Waiting up to 5m0s for pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46" in namespace "containers-7051" to be "success or failure"
Jan  7 15:15:57.171: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 5.57337ms
Jan  7 15:15:59.177: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01081666s
Jan  7 15:16:01.196: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029758255s
Jan  7 15:16:03.205: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038887645s
Jan  7 15:16:05.214: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048291413s
Jan  7 15:16:07.223: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057171313s
Jan  7 15:16:09.234: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.0678762s
STEP: Saw pod success
Jan  7 15:16:09.234: INFO: Pod "client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46" satisfied condition "success or failure"
Jan  7 15:16:09.239: INFO: Trying to get logs from node iruya-node pod client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46 container test-container: 
STEP: delete the pod
Jan  7 15:16:09.299: INFO: Waiting for pod client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46 to disappear
Jan  7 15:16:09.337: INFO: Pod client-containers-9830e1e7-752a-4d89-b9b8-d24f5e29ab46 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  7 15:16:09.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7051" for this suite.
Jan  7 15:16:15.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 15:16:15.557: INFO: namespace containers-7051 deletion completed in 6.214421785s

• [SLOW TEST:18.501 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
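(A run like this one is normally produced by pointing the prebuilt e2e.test binary at the cluster and focusing on conformance specs; exact flags vary by release, so the invocation below is an assumed sketch, not the command that produced this log.)

e2e.test --kubeconfig=/root/.kube/config --ginkgo.focus='\[Conformance\]'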
SSSSSSSSSSSSSSSSSSS
Jan  7 15:16:15.558: INFO: Running AfterSuite actions on all nodes
Jan  7 15:16:15.558: INFO: Running AfterSuite actions on node 1
Jan  7 15:16:15.558: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8390.553 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS