I0501 15:26:17.023229 6 e2e.go:243] Starting e2e run "9715c762-004b-4ffc-b2a8-e7d7a88754c5" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588346776 - Will randomize all specs
Will run 215 of 4412 specs

May 1 15:26:17.215: INFO: >>> kubeConfig: /root/.kube/config
May 1 15:26:17.220: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 1 15:26:17.237: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 1 15:26:17.270: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 1 15:26:17.270: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 1 15:26:17.270: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 1 15:26:17.278: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 1 15:26:17.278: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 1 15:26:17.278: INFO: e2e test version: v1.15.11
May 1 15:26:17.279: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:26:17.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 1 15:26:17.337: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
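Editorial aside: the suite's startup checks above ("Waiting up to 30m0s for all (but 0) nodes to be schedulable", "Waiting up to 5m0s for all daemonsets ... to start") all follow the same poll-until-timeout shape. The following is a minimal hypothetical sketch of that pattern in Python, not code from the e2e framework; the helper name `wait_for` and its parameters are invented for illustration.

```python
import time

def wait_for(condition, timeout_s, interval_s=0.5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout_s` elapses.

    Returns True on success, False on timeout. Hypothetical sketch of the
    "Waiting up to ..." checks in the log; not the framework's actual code.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return condition()  # one final check at the deadline

# Example: a condition that becomes true on the third poll.
polls = {"n": 0}
def three_polls():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_for(three_polls, timeout_s=5, interval_s=0)
```

The framework layers several such waits (nodes schedulable, kube-system pods ready, daemonsets started) before any spec runs, which is why every test block below begins only after these checks pass.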
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 1 15:26:17.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d" in namespace "downward-api-516" to be "success or failure" May 1 15:26:17.410: INFO: Pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d": Phase="Pending", Reason="", readiness=false. Elapsed: 66.320789ms May 1 15:26:19.415: INFO: Pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071162552s May 1 15:26:21.423: INFO: Pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079226666s May 1 15:26:23.428: INFO: Pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.083950548s STEP: Saw pod success May 1 15:26:23.428: INFO: Pod "downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d" satisfied condition "success or failure" May 1 15:26:23.432: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d container client-container: STEP: delete the pod May 1 15:26:23.472: INFO: Waiting for pod downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d to disappear May 1 15:26:23.476: INFO: Pod downwardapi-volume-ec62e861-a07f-4b45-8cc4-3bda5de0873d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:26:23.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-516" for this suite. May 1 15:26:29.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:26:29.594: INFO: namespace downward-api-516 deletion completed in 6.114713703s • [SLOW TEST:12.315 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:26:29.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default 
service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:26:37.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8767" for this suite. May 1 15:26:59.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:26:59.528: INFO: namespace replication-controller-8767 deletion completed in 22.088047895s • [SLOW TEST:29.934 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:26:59.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old 
pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 15:26:59.657: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 1 15:26:59.724: INFO: Pod name sample-pod: Found 0 pods out of 1 May 1 15:27:04.732: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 15:27:06.740: INFO: Creating deployment "test-rolling-update-deployment" May 1 15:27:06.744: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 1 15:27:06.774: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 1 15:27:08.875: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 1 15:27:08.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:27:10.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943626, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:27:12.882: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 1 15:27:12.892: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6989,SelfLink:/apis/apps/v1/namespaces/deployment-6989/deployments/test-rolling-update-deployment,UID:493efe06-f6ec-409d-9906-3c3b223485ea,ResourceVersion:8455573,Generation:1,CreationTimestamp:2020-05-01 15:27:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 15:27:06 +0000 
UTC 2020-05-01 15:27:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 15:27:11 +0000 UTC 2020-05-01 15:27:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 15:27:12.896: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6989,SelfLink:/apis/apps/v1/namespaces/deployment-6989/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:62e4a8cb-5dd6-4596-88e1-dc329f1a67d8,ResourceVersion:8455561,Generation:1,CreationTimestamp:2020-05-01 15:27:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 493efe06-f6ec-409d-9906-3c3b223485ea 0xc002242df7 0xc002242df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 
79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 15:27:12.896: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 1 15:27:12.896: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6989,SelfLink:/apis/apps/v1/namespaces/deployment-6989/replicasets/test-rolling-update-controller,UID:4200e75c-a810-45dd-b816-8945e9986b7e,ResourceVersion:8455572,Generation:2,CreationTimestamp:2020-05-01 15:26:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 493efe06-f6ec-409d-9906-3c3b223485ea 0xc002242d0f 0xc002242d20}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 15:27:12.900: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-qf5kv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-qf5kv,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6989,SelfLink:/api/v1/namespaces/deployment-6989/pods/test-rolling-update-deployment-79f6b9d75c-qf5kv,UID:5098e212-00dc-4441-a70e-5f6601e64d0b,ResourceVersion:8455560,Generation:0,CreationTimestamp:2020-05-01 15:27:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 62e4a8cb-5dd6-4596-88e1-dc329f1a67d8 0xc002132ab7 0xc002132ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dbnrx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dbnrx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-dbnrx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002132b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002132b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.235,StartTime:2020-05-01 15:27:06 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 15:27:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3b898b7e78465f88fdc1aaa3550ca99361d2506200f3932806a351f215bf97d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:27:12.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6989" for this suite. May 1 15:27:18.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:27:19.007: INFO: namespace deployment-6989 deletion completed in 6.103910785s • [SLOW TEST:19.479 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:27:19.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:28:19.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6612" for this suite. May 1 15:28:41.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:28:41.278: INFO: namespace container-probe-6612 deletion completed in 22.180124383s • [SLOW TEST:82.270 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:28:41.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 15:28:41.573: INFO: Creating ReplicaSet my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466 May 1 15:28:41.619: INFO: Pod name my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466: Found 0 pods out of 1 May 1 15:28:46.624: INFO: Pod name 
my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466: Found 1 pods out of 1 May 1 15:28:46.624: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466" is running May 1 15:28:46.627: INFO: Pod "my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466-rdmfs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:28:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:28:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:28:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:28:41 +0000 UTC Reason: Message:}]) May 1 15:28:46.627: INFO: Trying to dial the pod May 1 15:28:51.837: INFO: Controller my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466: Got expected result from replica 1 [my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466-rdmfs]: "my-hostname-basic-b26caf91-f8a1-4b3a-84b1-a4c6b3dde466-rdmfs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:28:51.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9732" for this suite. 
May 1 15:28:57.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:28:58.002: INFO: namespace replicaset-9732 deletion completed in 6.16151983s • [SLOW TEST:16.724 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:28:58.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 1 15:28:58.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 15:28:58.111: INFO: Waiting for terminating namespaces to be deleted... 
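Editorial aside: the SchedulerPredicates spec that follows fills each node's allocatable CPU with "filler" pods, then shows that one more pod requesting CPU is rejected ("0/3 nodes are available ... Insufficient cpu"). A minimal sketch of that arithmetic, with illustrative millicore values (the log records the per-pod requests but not the nodes' allocatable CPU, so those figures are assumptions):

```python
# Illustrative per-node CPU accounting behind the predicate test.
# All values in millicores; allocatable figures are invented for the sketch.
allocatable = {"iruya-worker": 2000, "iruya-worker2": 2000}
requested = {"iruya-worker": 100, "iruya-worker2": 300}  # existing kube-system pods

# Size a "filler" pod per node so each node becomes exactly full.
filler = {node: allocatable[node] - requested[node] for node in allocatable}

def fits(node, request_mcpu):
    """Would a pod requesting `request_mcpu` still fit on `node` after filling?"""
    used = requested[node] + filler[node]
    return used + request_mcpu <= allocatable[node]

# Once the fillers land, an additional pod requesting any nonzero CPU
# fits nowhere, matching the FailedScheduling event in the log.
assert not any(fits(node, 600) for node in allocatable)
```

The third node in the "0/3 nodes" message is the control-plane node, excluded by a taint rather than by CPU pressure.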
May 1 15:28:58.114: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 1 15:28:58.119: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.119: INFO: Container kube-proxy ready: true, restart count 0
May 1 15:28:58.119: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.119: INFO: Container kindnet-cni ready: true, restart count 0
May 1 15:28:58.119: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 1 15:28:58.124: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.124: INFO: Container coredns ready: true, restart count 0
May 1 15:28:58.124: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.124: INFO: Container coredns ready: true, restart count 0
May 1 15:28:58.124: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.124: INFO: Container kindnet-cni ready: true, restart count 0
May 1 15:28:58.124: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 1 15:28:58.124: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
May 1 15:28:58.211: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
May 1 15:28:58.211: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
May 1 15:28:58.211: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
May 1 15:28:58.211: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
May 1 15:28:58.211: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
May 1 15:28:58.211: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22.160af11ff986f9a9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4722/filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22.160af1205b2d66c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22.160af120e7052952], Reason = [Created], Message = [Created container filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22.160af120f707aeef], Reason = [Started], Message = [Started container filler-pod-9a6f81e6-f923-4354-b19b-157b7ba3ae22]
STEP: Considering event: Type = [Normal], Name = [filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35.160af11ffb1fb45e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4722/filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35.160af1207dfe27a9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35.160af120eb573ee7], Reason = [Created], Message = [Created container filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35]
STEP: Considering event: Type = [Normal], Name = [filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35.160af120fa9b74d7], Reason = [Started], Message = [Started container filler-pod-be18979c-1bc1-4060-86c7-9c76a5d24d35]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160af12161d6856f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:29:05.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4722" for this suite.
May 1 15:29:13.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:29:13.477: INFO: namespace sched-pred-4722 deletion completed in 8.112971702s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:15.475 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:29:13.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9748/configmap-test-1a2a9270-9887-4b4c-981f-b16511f7a161
STEP: Creating a pod to test consume configMaps
May 1 15:29:13.553: INFO: Waiting up to 5m0s for pod "pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c" in namespace "configmap-9748" to be "success or failure"
May 1 15:29:13.556: INFO: Pod "pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231104ms
May 1 15:29:15.699: INFO: Pod "pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145628037s
May 1 15:29:17.702: INFO: Pod "pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149130577s
STEP: Saw pod success
May 1 15:29:17.702: INFO: Pod "pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c" satisfied condition "success or failure"
May 1 15:29:17.704: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c container env-test:
STEP: delete the pod
May 1 15:29:17.850: INFO: Waiting for pod pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c to disappear
May 1 15:29:18.051: INFO: Pod pod-configmaps-513788bf-5d50-4069-b832-cfa6b85b774c no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:29:18.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9748" for this suite.
May 1 15:29:24.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:29:24.166: INFO: namespace configmap-9748 deletion completed in 6.11145946s
• [SLOW TEST:10.688 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:29:24.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a3e9aa34-e7e2-4a92-9280-e7c9a148e890
STEP: Creating a pod to test consume secrets
May 1 15:29:24.271: INFO: Waiting up to 5m0s for pod "pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e" in namespace "secrets-4448" to be "success or failure"
May 1 15:29:24.281: INFO: Pod "pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.767735ms
May 1 15:29:26.357: INFO: Pod "pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085763398s
May 1 15:29:28.361: INFO: Pod "pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089624795s
STEP: Saw pod success
May 1 15:29:28.361: INFO: Pod "pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e" satisfied condition "success or failure"
May 1 15:29:28.364: INFO: Trying to get logs from node iruya-worker pod pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e container secret-volume-test:
STEP: delete the pod
May 1 15:29:28.397: INFO: Waiting for pod pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e to disappear
May 1 15:29:28.407: INFO: Pod pod-secrets-de572118-2a8b-4dc0-b0f8-6e3962a38e1e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:29:28.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4448" for this suite.
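Each of the volume tests in this run follows the same wait pattern visible above: poll the pod roughly every two seconds, log the phase and elapsed time, and stop once the pod reaches a terminal phase (or a five-minute timeout expires). A minimal Python sketch of that loop; `wait_for_pod_phase` and `get_phase` are illustrative stand-ins, not the e2e framework's actual API:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0, clock=time.monotonic):
    """Poll get_phase() until it reports a terminal phase or the timeout expires.

    Mirrors the "Waiting up to 5m0s for pod ... to be 'success or failure'"
    loop in the log above; get_phase stands in for a real API lookup.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase} after {timeout}s')
        time.sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds.
phases = iter(['Pending', 'Pending', 'Succeeded'])
result = wait_for_pod_phase(lambda: next(phases), interval=0.0)
```

The elapsed-time values printed in the log (e.g. `Elapsed: 2.085763398s`) come from exactly this kind of per-poll delta against the wait's start time.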
May 1 15:29:34.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:29:34.500: INFO: namespace secrets-4448 deletion completed in 6.089634759s
• [SLOW TEST:10.333 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:29:34.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 1 15:29:34.563: INFO: Waiting up to 5m0s for pod "downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c" in namespace "downward-api-1050" to be "success or failure"
May 1 15:29:34.570: INFO: Pod "downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.92259ms
May 1 15:29:36.575: INFO: Pod "downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011302844s
May 1 15:29:38.705: INFO: Pod "downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1418762s
STEP: Saw pod success
May 1 15:29:38.705: INFO: Pod "downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c" satisfied condition "success or failure"
May 1 15:29:38.708: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c container dapi-container:
STEP: delete the pod
May 1 15:29:38.740: INFO: Waiting for pod downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c to disappear
May 1 15:29:38.755: INFO: Pod downward-api-c0e53269-eab2-4fb9-9fda-7fd83d84129c no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:29:38.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1050" for this suite.
May 1 15:29:44.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:29:44.923: INFO: namespace downward-api-1050 deletion completed in 6.164965842s
• [SLOW TEST:10.423 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:29:44.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 1 15:29:44.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8492'
May 1 15:29:50.623: INFO: stderr: ""
May 1 15:29:50.623: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:29:50.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:29:50.734: INFO: stderr: ""
May 1 15:29:50.735: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
May 1 15:29:55.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:29:55.821: INFO: stderr: ""
May 1 15:29:55.821: INFO: stdout: "update-demo-nautilus-4jlzm update-demo-nautilus-9ghzb "
May 1 15:29:55.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jlzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:29:55.994: INFO: stderr: ""
May 1 15:29:55.994: INFO: stdout: ""
May 1 15:29:55.994: INFO: update-demo-nautilus-4jlzm is created but not running
May 1 15:30:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:30:01.101: INFO: stderr: ""
May 1 15:30:01.101: INFO: stdout: "update-demo-nautilus-4jlzm update-demo-nautilus-9ghzb "
May 1 15:30:01.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jlzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:01.187: INFO: stderr: ""
May 1 15:30:01.187: INFO: stdout: "true"
May 1 15:30:01.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jlzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:01.275: INFO: stderr: ""
May 1 15:30:01.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:01.275: INFO: validating pod update-demo-nautilus-4jlzm
May 1 15:30:01.278: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:01.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:01.278: INFO: update-demo-nautilus-4jlzm is verified up and running
May 1 15:30:01.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:01.364: INFO: stderr: ""
May 1 15:30:01.364: INFO: stdout: "true"
May 1 15:30:01.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:01.471: INFO: stderr: ""
May 1 15:30:01.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:01.471: INFO: validating pod update-demo-nautilus-9ghzb
May 1 15:30:01.475: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:01.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:01.475: INFO: update-demo-nautilus-9ghzb is verified up and running
STEP: scaling down the replication controller
May 1 15:30:01.477: INFO: scanned /root for discovery docs:
May 1 15:30:01.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8492'
May 1 15:30:02.664: INFO: stderr: ""
May 1 15:30:02.664: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:30:02.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:30:03.024: INFO: stderr: ""
May 1 15:30:03.024: INFO: stdout: "update-demo-nautilus-4jlzm update-demo-nautilus-9ghzb "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 1 15:30:08.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:30:08.123: INFO: stderr: ""
May 1 15:30:08.123: INFO: stdout: "update-demo-nautilus-9ghzb "
May 1 15:30:08.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:08.223: INFO: stderr: ""
May 1 15:30:08.223: INFO: stdout: "true"
May 1 15:30:08.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:08.327: INFO: stderr: ""
May 1 15:30:08.327: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:08.327: INFO: validating pod update-demo-nautilus-9ghzb
May 1 15:30:08.331: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:08.331: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
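The scale check above alternates two kubectl go-template queries: one listing pod names for the `name=update-demo` label, and one printing `true` only when the named container has a `running` state. A small Python sketch of how the test interprets that stdout (function names are illustrative, not the e2e framework's API):

```python
def listed_pods(template_stdout):
    """Split the space-separated names printed by
    --template={{range.items}}{{.metadata.name}} {{end}}."""
    return template_stdout.split()

def container_running(template_stdout):
    """The containerStatuses template prints "true" for a running container
    and nothing otherwise; empty stdout means "created but not running"."""
    return template_stdout.strip() == 'true'

def replicas_settled(template_stdout, expected):
    """Log the expected/actual mismatch as the test does, and tell the
    caller whether to retry (the real test retries every 5 seconds)."""
    actual = len(listed_pods(template_stdout))
    if actual != expected:
        print(f'Replicas for name=update-demo: expected={expected} actual={actual}')
        return False
    return True

# Right after `kubectl scale --replicas=1`, both pods are still listed:
first_poll = replicas_settled('update-demo-nautilus-4jlzm update-demo-nautilus-9ghzb ', 1)
# A later poll sees only the survivor, which also reports running:
second_poll = replicas_settled('update-demo-nautilus-9ghzb ', 1)
survivor_running = container_running('true')
```

This matches the log's progression from `expected=1 actual=2` to a single listed pod whose status query prints `"true"`.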
May 1 15:30:08.331: INFO: update-demo-nautilus-9ghzb is verified up and running
STEP: scaling up the replication controller
May 1 15:30:08.333: INFO: scanned /root for discovery docs:
May 1 15:30:08.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8492'
May 1 15:30:09.463: INFO: stderr: ""
May 1 15:30:09.463: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:30:09.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:30:09.556: INFO: stderr: ""
May 1 15:30:09.556: INFO: stdout: "update-demo-nautilus-49lwj update-demo-nautilus-9ghzb "
May 1 15:30:09.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49lwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:09.657: INFO: stderr: ""
May 1 15:30:09.657: INFO: stdout: ""
May 1 15:30:09.657: INFO: update-demo-nautilus-49lwj is created but not running
May 1 15:30:14.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8492'
May 1 15:30:14.769: INFO: stderr: ""
May 1 15:30:14.769: INFO: stdout: "update-demo-nautilus-49lwj update-demo-nautilus-9ghzb "
May 1 15:30:14.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49lwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:14.859: INFO: stderr: ""
May 1 15:30:14.859: INFO: stdout: "true"
May 1 15:30:14.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49lwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:14.944: INFO: stderr: ""
May 1 15:30:14.944: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:14.944: INFO: validating pod update-demo-nautilus-49lwj
May 1 15:30:14.949: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:14.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:14.949: INFO: update-demo-nautilus-49lwj is verified up and running
May 1 15:30:14.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:15.042: INFO: stderr: ""
May 1 15:30:15.042: INFO: stdout: "true"
May 1 15:30:15.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ghzb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8492'
May 1 15:30:15.140: INFO: stderr: ""
May 1 15:30:15.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:15.140: INFO: validating pod update-demo-nautilus-9ghzb
May 1 15:30:15.144: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:15.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:15.144: INFO: update-demo-nautilus-9ghzb is verified up and running
STEP: using delete to clean up resources
May 1 15:30:15.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8492'
May 1 15:30:15.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 15:30:15.239: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 1 15:30:15.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8492'
May 1 15:30:15.326: INFO: stderr: "No resources found.\n"
May 1 15:30:15.326: INFO: stdout: ""
May 1 15:30:15.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8492 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 15:30:15.421: INFO: stderr: ""
May 1 15:30:15.421: INFO: stdout: "update-demo-nautilus-49lwj\nupdate-demo-nautilus-9ghzb\n"
May 1 15:30:15.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8492'
May 1 15:30:16.015: INFO: stderr: "No resources found.\n"
May 1 15:30:16.015: INFO: stdout: ""
May 1 15:30:16.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8492 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 15:30:16.110: INFO: stderr: ""
May 1 15:30:16.111: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:30:16.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8492" for this suite.
May 1 15:30:38.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:30:38.343: INFO: namespace kubectl-8492 deletion completed in 22.228642091s
• [SLOW TEST:53.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:30:38.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 1 15:30:38.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3182'
May 1 15:30:38.699: INFO: stderr: ""
May 1 15:30:38.699: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:30:38.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3182'
May 1 15:30:38.837: INFO: stderr: ""
May 1 15:30:38.837: INFO: stdout: "update-demo-nautilus-9x6g6 update-demo-nautilus-b2njp "
May 1 15:30:38.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x6g6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:38.926: INFO: stderr: ""
May 1 15:30:38.926: INFO: stdout: ""
May 1 15:30:38.926: INFO: update-demo-nautilus-9x6g6 is created but not running
May 1 15:30:43.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3182'
May 1 15:30:44.029: INFO: stderr: ""
May 1 15:30:44.029: INFO: stdout: "update-demo-nautilus-9x6g6 update-demo-nautilus-b2njp "
May 1 15:30:44.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x6g6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:44.154: INFO: stderr: ""
May 1 15:30:44.155: INFO: stdout: ""
May 1 15:30:44.155: INFO: update-demo-nautilus-9x6g6 is created but not running
May 1 15:30:49.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3182'
May 1 15:30:49.317: INFO: stderr: ""
May 1 15:30:49.317: INFO: stdout: "update-demo-nautilus-9x6g6 update-demo-nautilus-b2njp "
May 1 15:30:49.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x6g6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:49.990: INFO: stderr: ""
May 1 15:30:49.990: INFO: stdout: "true"
May 1 15:30:49.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x6g6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:50.083: INFO: stderr: ""
May 1 15:30:50.083: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:50.083: INFO: validating pod update-demo-nautilus-9x6g6
May 1 15:30:50.087: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:50.087: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:50.087: INFO: update-demo-nautilus-9x6g6 is verified up and running
May 1 15:30:50.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2njp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:50.178: INFO: stderr: ""
May 1 15:30:50.178: INFO: stdout: "true"
May 1 15:30:50.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2njp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3182'
May 1 15:30:50.280: INFO: stderr: ""
May 1 15:30:50.280: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:30:50.280: INFO: validating pod update-demo-nautilus-b2njp
May 1 15:30:50.283: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:30:50.283: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:30:50.283: INFO: update-demo-nautilus-b2njp is verified up and running
STEP: using delete to clean up resources
May 1 15:30:50.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3182'
May 1 15:30:50.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 15:30:50.862: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 1 15:30:50.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3182'
May 1 15:30:52.604: INFO: stderr: "No resources found.\n"
May 1 15:30:52.604: INFO: stdout: ""
May 1 15:30:52.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3182 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 15:30:52.750: INFO: stderr: ""
May 1 15:30:52.750: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:30:52.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3182" for this suite.
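The cleanup step above force-deletes with `--grace-period=0 --force` and then keeps re-running two kubectl queries until both show nothing left: `get rc,svc -l name=update-demo` must print "No resources found." and the go-template listing pods without a `deletionTimestamp` must print nothing. A minimal Python sketch of that termination check (`cleanup_complete` is an illustrative name, not the framework's):

```python
def cleanup_complete(rc_svc_stderr, pods_stdout):
    """Return True once both post-delete queries report nothing left:
    no replication controllers or services under the label, and no pods
    that lack a deletionTimestamp."""
    no_controllers = rc_svc_stderr.strip() == 'No resources found.'
    no_pods = pods_stdout.strip() == ''
    return no_controllers and no_pods

# First poll: the RC is already gone, but two pods are still terminating.
first_check = cleanup_complete(
    'No resources found.\n',
    'update-demo-nautilus-49lwj\nupdate-demo-nautilus-9ghzb\n')
# A later poll: the pod listing is empty too, so cleanup is done.
second_check = cleanup_complete('No resources found.\n', '')
```

In the log, the first kubectl-8492 poll at 15:30:15 still lists both pods; by 15:30:16 the template prints nothing and the test proceeds to teardown.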
May 1 15:31:17.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:31:17.442: INFO: namespace kubectl-3182 deletion completed in 24.688063629s
• [SLOW TEST:39.099 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:31:17.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3499d997-cf27-4d18-b6a9-86b77f0042d3
STEP: Creating a pod to test consume configMaps
May 1 15:31:19.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6" in namespace "configmap-6091" to be "success or failure"
May 1 15:31:19.730: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Pending", Reason="", readiness=false. Elapsed: 210.581756ms
May 1 15:31:22.676: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.156366513s
May 1 15:31:24.731: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.210936381s
May 1 15:31:26.981: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.461743685s
May 1 15:31:29.065: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.545479965s
May 1 15:31:31.072: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.552179773s
STEP: Saw pod success
May 1 15:31:31.072: INFO: Pod "pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6" satisfied condition "success or failure"
May 1 15:31:31.076: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6 container configmap-volume-test:
STEP: delete the pod
May 1 15:31:31.354: INFO: Waiting for pod pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6 to disappear
May 1 15:31:31.630: INFO: Pod pod-configmaps-939311cd-0eb6-4e03-ab41-7869625b16b6 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:31:31.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6091" for this suite.
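The repeated `Waiting up to 5m0s for pod … to be "success or failure"` entries above come from a poll loop that re-reads the pod phase until it reaches a terminal state, logging the elapsed time at each attempt. A minimal sketch of that loop in Python, with `get_phase` as a hypothetical stand-in for the API lookup (names and intervals are illustrative, not the framework's own):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it is Succeeded or Failed, logging elapsed
    time each round, in the spirit of the e2e "success or failure" wait."""
    start = time.monotonic()
    while True:
        phase = get_phase()  # stand-in for reading pod.status.phase from the API
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)
```

With a stubbed phase sequence such as `Pending, Pending, Succeeded`, the loop prints one line per poll and returns the terminal phase, matching the cadence of the log entries above.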
May 1 15:31:39.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:31:40.386: INFO: namespace configmap-6091 deletion completed in 8.753137567s
• [SLOW TEST:22.945 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:31:40.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5658ddb2-d579-49cb-874e-457531aa1c2b
STEP: Creating configMap with name cm-test-opt-upd-78de6ca6-f4b1-45b2-bd1c-d7143e45fecb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5658ddb2-d579-49cb-874e-457531aa1c2b
STEP: Updating configmap cm-test-opt-upd-78de6ca6-f4b1-45b2-bd1c-d7143e45fecb
STEP: Creating configMap with name cm-test-opt-create-ee4d79b1-37b1-4bfd-afe6-a1b3d1285c13
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:33:00.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-348" for this suite.
May 1 15:33:26.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:33:26.775: INFO: namespace configmap-348 deletion completed in 26.33879694s
• [SLOW TEST:106.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:33:26.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-j58q
STEP: Creating a pod to test atomic-volume-subpath
May 1 15:33:26.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-j58q" in namespace "subpath-6752" to be "success or failure"
May 1 15:33:26.883: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Pending", Reason="", readiness=false. Elapsed: 19.35063ms
May 1 15:33:28.885: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022196061s
May 1 15:33:30.890: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 4.026421886s
May 1 15:33:32.894: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 6.030650993s
May 1 15:33:34.897: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 8.033942925s
May 1 15:33:37.031: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 10.168241276s
May 1 15:33:39.036: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 12.172893193s
May 1 15:33:41.040: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 14.177285651s
May 1 15:33:43.045: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 16.181971841s
May 1 15:33:45.049: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 18.186094295s
May 1 15:33:47.052: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 20.189184084s
May 1 15:33:49.057: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 22.193336834s
May 1 15:33:51.060: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Running", Reason="", readiness=true. Elapsed: 24.196409886s
May 1 15:33:53.064: INFO: Pod "pod-subpath-test-projected-j58q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.20069458s
STEP: Saw pod success
May 1 15:33:53.064: INFO: Pod "pod-subpath-test-projected-j58q" satisfied condition "success or failure"
May 1 15:33:53.067: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-j58q container test-container-subpath-projected-j58q:
STEP: delete the pod
May 1 15:33:53.285: INFO: Waiting for pod pod-subpath-test-projected-j58q to disappear
May 1 15:33:53.381: INFO: Pod pod-subpath-test-projected-j58q no longer exists
STEP: Deleting pod pod-subpath-test-projected-j58q
May 1 15:33:53.381: INFO: Deleting pod "pod-subpath-test-projected-j58q" in namespace "subpath-6752"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:33:53.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6752" for this suite.
May 1 15:33:59.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:33:59.579: INFO: namespace subpath-6752 deletion completed in 6.135791549s
• [SLOW TEST:32.804 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:33:59.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 1 15:33:59.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456802,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 15:33:59.725: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456803,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 1 15:33:59.725: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456804,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 1 15:34:09.860: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456825,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 1 15:34:09.860: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456826,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 1 15:34:09.860: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8676,SelfLink:/api/v1/namespaces/watch-8676/configmaps/e2e-watch-test-label-changed,UID:0ae2652d-4271-4039-8974-fca6ea19a432,ResourceVersion:8456827,Generation:0,CreationTimestamp:2020-05-01 15:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:34:09.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8676" for this suite.
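The watch test above hinges on equality-based label selection: events are delivered only while the ConfigMap's labels satisfy the watch's selector, so changing the label away produces a DELETED notification and restoring it produces an ADDED one. A minimal sketch of that membership check in Python (function and field names are illustrative, operating on plain dicts rather than API objects):

```python
def matches_selector(obj, selector):
    """Return True if the object's labels satisfy an equality-based label
    selector (every selector key present with an equal value), the test the
    watch server applies to decide whether an object is "in" the watch."""
    labels = obj.get("metadata", {}).get("labels", {})
    return all(labels.get(key) == value for key, value in selector.items())
```

An object that stops matching is reported to the watcher as DELETED even though it still exists in the cluster, which is exactly the behavior the spec name describes.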
May 1 15:34:17.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:34:18.042: INFO: namespace watch-8676 deletion completed in 8.169767706s
• [SLOW TEST:18.463 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:34:18.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 1 15:34:18.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6559'
May 1 15:34:19.029: INFO: stderr: ""
May 1 15:34:19.029: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 1 15:34:20.037: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:20.037: INFO: Found 0 / 1
May 1 15:34:21.182: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:21.182: INFO: Found 0 / 1
May 1 15:34:22.350: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:22.350: INFO: Found 0 / 1
May 1 15:34:23.038: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:23.038: INFO: Found 0 / 1
May 1 15:34:24.128: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:24.128: INFO: Found 0 / 1
May 1 15:34:25.074: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:25.074: INFO: Found 1 / 1
May 1 15:34:25.074: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 1 15:34:25.152: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:25.152: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 1 15:34:25.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-w462t --namespace=kubectl-6559 -p {"metadata":{"annotations":{"x":"y"}}}'
May 1 15:34:25.298: INFO: stderr: ""
May 1 15:34:25.298: INFO: stdout: "pod/redis-master-w462t patched\n"
STEP: checking annotations
May 1 15:34:25.302: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:34:25.302: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:34:25.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6559" for this suite.
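The `kubectl patch pod … -p {"metadata":{"annotations":{"x":"y"}}}` invocation above applies a merge patch: map-valued fields like `annotations` are merged key-by-key rather than replaced wholesale, so existing annotations survive. A rough Python sketch of that merge on a plain dict (the helper name is illustrative; this ignores the deletion and list-merge rules of the full patch semantics):

```python
def merge_annotations(pod, patch):
    """Merge the annotations from a patch body such as
    {"metadata": {"annotations": {"x": "y"}}} into a pod dict,
    preserving annotations the pod already has."""
    incoming = patch.get("metadata", {}).get("annotations", {})
    pod.setdefault("metadata", {}).setdefault("annotations", {}).update(incoming)
    return pod
```

After the merge, the test's "checking annotations" step only needs to confirm the `x: y` pair is present on each patched pod.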
May 1 15:34:47.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:34:47.574: INFO: namespace kubectl-6559 deletion completed in 22.268617183s
• [SLOW TEST:29.531 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:34:47.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6916
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6916 to expose endpoints map[]
May 1 15:34:47.924: INFO: Get endpoints failed (12.696258ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 1 15:34:48.928: INFO: successfully validated that service endpoint-test2 in namespace services-6916 exposes endpoints map[] (1.016611067s elapsed)
STEP: Creating pod pod1 in namespace services-6916
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6916 to expose endpoints map[pod1:[80]]
May 1 15:34:53.646: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.710874203s elapsed, will retry)
May 1 15:34:54.653: INFO: successfully validated that service endpoint-test2 in namespace services-6916 exposes endpoints map[pod1:[80]] (5.718584455s elapsed)
STEP: Creating pod pod2 in namespace services-6916
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6916 to expose endpoints map[pod1:[80] pod2:[80]]
May 1 15:34:58.782: INFO: Unexpected endpoints: found map[c768410d-78db-43ad-aed0-cbd3c66ef69b:[80]], expected map[pod1:[80] pod2:[80]] (4.124340633s elapsed, will retry)
May 1 15:34:59.792: INFO: successfully validated that service endpoint-test2 in namespace services-6916 exposes endpoints map[pod1:[80] pod2:[80]] (5.134512916s elapsed)
STEP: Deleting pod pod1 in namespace services-6916
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6916 to expose endpoints map[pod2:[80]]
May 1 15:35:00.929: INFO: successfully validated that service endpoint-test2 in namespace services-6916 exposes endpoints map[pod2:[80]] (1.131446957s elapsed)
STEP: Deleting pod pod2 in namespace services-6916
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6916 to expose endpoints map[]
May 1 15:35:00.969: INFO: successfully validated that service endpoint-test2 in namespace services-6916 exposes endpoints map[] (35.142598ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:35:01.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6916" for this suite.
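The "waiting … to expose endpoints map[pod1:[80] pod2:[80]]" checks above repeatedly compare the service's observed endpoints (pod name mapped to its port list) against an expected map, retrying until they agree. A minimal sketch of that comparison in Python (names are illustrative; the real framework resolves endpoint IPs back to pod names first):

```python
def endpoints_match(found, expected):
    """Compare an observed endpoints map {pod_name: [ports]} against the
    expected one, ignoring port order within each pod's list."""
    normalize = lambda m: {name: sorted(ports) for name, ports in m.items()}
    return normalize(found) == normalize(expected)
```

Until pod2's endpoint is registered, the check sees `map[pod1:[80]]` against an expected `map[pod1:[80] pod2:[80]]` and the wait loop retries, which is the "Unexpected endpoints … will retry" line in the log.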
May 1 15:35:23.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:35:23.511: INFO: namespace services-6916 deletion completed in 22.126792437s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:35.937 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:35:23.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-wnnc
STEP: Creating a pod to test atomic-volume-subpath
May 1 15:35:23.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wnnc" in namespace "subpath-2621" to be "success or failure"
May 1 15:35:23.640: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.069656ms
May 1 15:35:25.643: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037076012s
May 1 15:35:27.647: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 4.040971989s
May 1 15:35:29.652: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 6.046157861s
May 1 15:35:31.656: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 8.050193351s
May 1 15:35:33.660: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 10.054204424s
May 1 15:35:35.665: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 12.059822308s
May 1 15:35:37.670: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 14.064286262s
May 1 15:35:39.675: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 16.069091026s
May 1 15:35:41.679: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 18.073502484s
May 1 15:35:43.683: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 20.07734625s
May 1 15:35:45.687: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 22.081533335s
May 1 15:35:47.691: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Running", Reason="", readiness=true. Elapsed: 24.085006422s
May 1 15:35:49.695: INFO: Pod "pod-subpath-test-secret-wnnc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.089750966s
STEP: Saw pod success
May 1 15:35:49.695: INFO: Pod "pod-subpath-test-secret-wnnc" satisfied condition "success or failure"
May 1 15:35:49.699: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-wnnc container test-container-subpath-secret-wnnc:
STEP: delete the pod
May 1 15:35:50.144: INFO: Waiting for pod pod-subpath-test-secret-wnnc to disappear
May 1 15:35:50.387: INFO: Pod pod-subpath-test-secret-wnnc no longer exists
STEP: Deleting pod pod-subpath-test-secret-wnnc
May 1 15:35:50.387: INFO: Deleting pod "pod-subpath-test-secret-wnnc" in namespace "subpath-2621"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:35:50.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2621" for this suite.
May 1 15:35:58.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:35:58.630: INFO: namespace subpath-2621 deletion completed in 8.235160096s
• [SLOW TEST:35.119 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:35:58.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
May 1 15:35:58.790: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4322" to be "success or failure"
May 1 15:35:58.814: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.085038ms
May 1 15:36:00.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028881601s
May 1 15:36:02.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032445968s
May 1 15:36:04.826: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036159311s
May 1 15:36:06.830: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040121052s
STEP: Saw pod success
May 1 15:36:06.830: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 1 15:36:06.834: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 1 15:36:06.933: INFO: Waiting for pod pod-host-path-test to disappear
May 1 15:36:06.945: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:36:06.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4322" for this suite.
May 1 15:36:14.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:36:15.057: INFO: namespace hostpath-4322 deletion completed in 8.107708051s
• [SLOW TEST:16.426 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:36:15.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 1 15:36:23.927: INFO: Successfully updated pod "annotationupdate3defa6c8-f9f9-4e47-be56-f4d545586c23"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:36:26.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4077" for this suite.
May 1 15:36:52.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:36:52.420: INFO: namespace projected-4077 deletion completed in 26.140699212s
• [SLOW TEST:37.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:36:52.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 15:36:52.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6342'
May 1 15:36:53.980: INFO: stderr: ""
May 1 15:36:53.980: INFO: stdout: "replicationcontroller/redis-master created\n"
May 1 15:36:53.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6342'
May 1 15:36:55.821: INFO: stderr: ""
May 1 15:36:55.821: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 1 15:36:57.492: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:36:57.493: INFO: Found 0 / 1
May 1 15:36:58.047: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:36:58.047: INFO: Found 0 / 1
May 1 15:36:59.005: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:36:59.005: INFO: Found 0 / 1
May 1 15:36:59.825: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:36:59.825: INFO: Found 0 / 1
May 1 15:37:01.161: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:37:01.161: INFO: Found 0 / 1
May 1 15:37:01.885: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:37:01.885: INFO: Found 0 / 1
May 1 15:37:02.825: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:37:02.825: INFO: Found 0 / 1
May 1 15:37:03.826: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:37:03.826: INFO: Found 1 / 1
May 1 15:37:03.826: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 1 15:37:03.829: INFO: Selector matched 1 pods for map[app:redis]
May 1 15:37:03.829: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 1 15:37:03.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vbplt --namespace=kubectl-6342'
May 1 15:37:03.935: INFO: stderr: ""
May 1 15:37:03.935: INFO: stdout: "Name: redis-master-vbplt\nNamespace: kubectl-6342\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Fri, 01 May 2020 15:36:54 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.130\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://3d7e55b6a65fb6aabff94e74db4f56f42015449f387449f86097da650d26397c\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 May 2020 15:37:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-psgtw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-psgtw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-psgtw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-6342/redis-master-vbplt to iruya-worker2\n Normal Pulled 7s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n"
May 1 15:37:03.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6342'
May 1 15:37:04.061: INFO: stderr: ""
May 1 15:37:04.061: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6342\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 11s replication-controller Created pod: redis-master-vbplt\n"
May 1 15:37:04.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6342'
May 1 15:37:04.170: INFO: stderr: ""
May 1 15:37:04.170: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6342\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.109.226.202\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.130:6379\nSession Affinity: None\nEvents: \n"
May 1 15:37:04.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
May 1 15:37:04.293: INFO: stderr: ""
May 1 15:37:04.293: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 May 2020 15:36:18 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 May 2020 15:36:18 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 May 2020 15:36:18 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 May 2020 15:36:18 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 46d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 1 15:37:04.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6342'
May 1 15:37:04.392: INFO: stderr: ""
May 1 15:37:04.392: INFO: stdout: "Name: kubectl-6342\nLabels: e2e-framework=kubectl\n e2e-run=9715c762-004b-4ffc-b2a8-e7d7a88754c5\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:37:04.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6342" for this suite.
May 1 15:37:28.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:37:28.787: INFO: namespace kubectl-6342 deletion completed in 24.391957097s
• [SLOW TEST:36.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:37:28.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 15:37:29.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0" in namespace "projected-3544" to be "success or failure"
May 1 15:37:29.170: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.133256ms
May 1 15:37:31.233: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091718996s
May 1 15:37:33.238: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096475657s
May 1 15:37:35.323: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181453038s
May 1 15:37:37.326: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18490898s
STEP: Saw pod success
May 1 15:37:37.326: INFO: Pod "downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0" satisfied condition "success or failure"
May 1 15:37:37.329: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0 container client-container:
STEP: delete the pod
May 1 15:37:37.426: INFO: Waiting for pod downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0 to disappear
May 1 15:37:37.430: INFO: Pod downwardapi-volume-7c97aed9-d701-4169-88da-4cd078f02fa0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:37:37.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3544" for this suite.
May 1 15:37:43.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:37:43.585: INFO: namespace projected-3544 deletion completed in 6.151778337s
• [SLOW TEST:14.797 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:37:43.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 15:37:43.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5" in namespace "projected-7152" to be "success or failure"
May 1 15:37:43.706: INFO: Pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500431ms
May 1 15:37:45.710: INFO: Pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006597047s
May 1 15:37:48.071: INFO: Pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367243099s
May 1 15:37:50.075: INFO: Pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371715379s
STEP: Saw pod success
May 1 15:37:50.075: INFO: Pod "downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5" satisfied condition "success or failure"
May 1 15:37:50.078: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5 container client-container:
STEP: delete the pod
May 1 15:37:50.110: INFO: Waiting for pod downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5 to disappear
May 1 15:37:50.125: INFO: Pod downwardapi-volume-1e8928d2-2490-4c12-91c8-4f7e27ad30f5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:37:50.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7152" for this suite.
May 1 15:37:56.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:37:56.343: INFO: namespace projected-7152 deletion completed in 6.21475125s
• [SLOW TEST:12.758 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:37:56.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8846
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 1 15:37:56.497: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 1 15:38:29.566: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.132 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8846 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 15:38:29.566: INFO: >>> kubeConfig: /root/.kube/config
I0501 15:38:29.591851 6 log.go:172] (0xc0009fa210) (0xc0012266e0) Create stream
I0501 15:38:29.591884 6 log.go:172] (0xc0009fa210) (0xc0012266e0) Stream added, broadcasting: 1
I0501 15:38:29.593569 6 log.go:172] (0xc0009fa210) Reply frame received for 1
I0501 15:38:29.593597 6 log.go:172] (0xc0009fa210) (0xc00235a140) Create stream
I0501 15:38:29.593609 6 log.go:172] (0xc0009fa210) (0xc00235a140) Stream added, broadcasting: 3
I0501 15:38:29.594210 6 log.go:172] (0xc0009fa210) Reply frame received for 3
I0501 15:38:29.594242 6 log.go:172] (0xc0009fa210) (0xc000396000) Create stream
I0501 15:38:29.594252 6 log.go:172] (0xc0009fa210) (0xc000396000) Stream added, broadcasting: 5
I0501 15:38:29.594992 6 log.go:172] (0xc0009fa210) Reply frame received for 5
I0501 15:38:30.689688 6 log.go:172] (0xc0009fa210) Data frame received for 5
I0501 15:38:30.689750 6 log.go:172] (0xc000396000) (5) Data frame handling
I0501 15:38:30.689781 6 log.go:172] (0xc0009fa210) Data frame received for 3
I0501 15:38:30.689793 6 log.go:172] (0xc00235a140) (3) Data frame handling
I0501 15:38:30.689805 6 log.go:172] (0xc00235a140) (3) Data frame sent
I0501 15:38:30.689860 6 log.go:172] (0xc0009fa210) Data frame received for 3
I0501 15:38:30.689871 6 log.go:172] (0xc00235a140) (3) Data frame handling
I0501 15:38:30.692026 6 log.go:172] (0xc0009fa210) Data frame received for 1
I0501 15:38:30.692044 6 log.go:172] (0xc0012266e0) (1) Data frame handling
I0501 15:38:30.692065 6 log.go:172] (0xc0012266e0) (1) Data frame sent
I0501 15:38:30.692076 6 log.go:172] (0xc0009fa210) (0xc0012266e0) Stream removed, broadcasting: 1
I0501 15:38:30.692088 6 log.go:172] (0xc0009fa210) Go away received
I0501 15:38:30.692503 6 log.go:172] (0xc0009fa210) (0xc0012266e0) Stream removed, broadcasting: 1
I0501 15:38:30.692528 6 log.go:172] (0xc0009fa210) (0xc00235a140) Stream removed, broadcasting: 3
I0501 15:38:30.692544 6 log.go:172] (0xc0009fa210) (0xc000396000) Stream removed, broadcasting: 5
May 1 15:38:30.692: INFO: Found all expected endpoints: [netserver-0]
May 1 15:38:30.696: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.250 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8846 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 15:38:30.696: INFO: >>> kubeConfig: /root/.kube/config
I0501 15:38:30.720746 6 log.go:172] (0xc00023ce70) (0xc000396be0) Create stream
I0501 15:38:30.720832 6 log.go:172] (0xc00023ce70) (0xc000396be0) Stream added, broadcasting: 1
I0501 15:38:30.723111 6 log.go:172] (0xc00023ce70) Reply frame received for 1
I0501 15:38:30.723182 6 log.go:172] (0xc00023ce70) (0xc00235a1e0) Create stream
I0501 15:38:30.723215 6 log.go:172] (0xc00023ce70) (0xc00235a1e0) Stream added, broadcasting: 3
I0501 15:38:30.724051 6 log.go:172] (0xc00023ce70) Reply frame received for 3
I0501 15:38:30.724091 6 log.go:172] (0xc00023ce70) (0xc00021a000) Create stream
I0501 15:38:30.724101 6 log.go:172] (0xc00023ce70) (0xc00021a000) Stream added, broadcasting: 5
I0501 15:38:30.724955 6 log.go:172] (0xc00023ce70) Reply frame received for 5
I0501 15:38:31.788854 6 log.go:172] (0xc00023ce70) Data frame received for 3
I0501 15:38:31.788889 6 log.go:172] (0xc00235a1e0) (3) Data frame handling
I0501 15:38:31.788914 6 log.go:172] (0xc00235a1e0) (3) Data frame sent
I0501 15:38:31.789339 6 log.go:172] (0xc00023ce70) Data frame received for 5
I0501 15:38:31.789374 6 log.go:172] (0xc00021a000) (5) Data frame handling
I0501 15:38:31.789585 6 log.go:172] (0xc00023ce70) Data frame received for 3
I0501 15:38:31.789601 6 log.go:172] (0xc00235a1e0) (3) Data frame handling
I0501 15:38:31.791021 6 log.go:172] (0xc00023ce70) Data frame received for 1
I0501 15:38:31.791045 6 log.go:172] (0xc000396be0) (1) Data frame handling
I0501 15:38:31.791068 6 log.go:172] (0xc000396be0) (1) Data frame sent
I0501 15:38:31.791142 6 log.go:172] (0xc00023ce70) (0xc000396be0) Stream removed, broadcasting: 1
I0501 15:38:31.791303 6 log.go:172] (0xc00023ce70) (0xc000396be0) Stream removed, broadcasting: 1
I0501 15:38:31.791322 6 log.go:172] (0xc00023ce70) (0xc00235a1e0) Stream removed, broadcasting: 3
I0501 15:38:31.791471 6 log.go:172] (0xc00023ce70) Go away received
I0501 15:38:31.791526 6 log.go:172] (0xc00023ce70) (0xc00021a000) Stream removed, broadcasting: 5
May 1 15:38:31.791: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:38:31.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8846" for this suite.
May 1 15:38:59.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:38:59.950: INFO: namespace pod-network-test-8846 deletion completed in 28.154113232s
• [SLOW TEST:63.607 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:38:59.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 1 15:39:12.651: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 1 15:39:13.118: INFO: Pod pod-with-poststart-http-hook still exists
May 1 15:39:15.118: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 1 15:39:15.123: INFO: Pod pod-with-poststart-http-hook still exists
May 1 15:39:17.118: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 1 15:39:17.122: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:39:17.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4830" for this suite.
May 1 15:39:43.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:39:43.246: INFO: namespace container-lifecycle-hook-4830 deletion completed in 26.122208291s
• [SLOW TEST:43.296 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:39:43.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:39:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9724" for this suite.
May 1 15:40:30.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:40:30.780: INFO: namespace kubelet-test-9724 deletion completed in 40.699827613s
• [SLOW TEST:47.533 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:40:30.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 1 15:40:31.358: INFO: Waiting up to 5m0s for pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2" in namespace "emptydir-9980" to be "success or failure"
May 1 15:40:31.684: INFO: Pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 326.249378ms
May 1 15:40:33.691: INFO: Pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333365525s
May 1 15:40:35.696: INFO: Pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337454907s
May 1 15:40:37.700: INFO: Pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.341576953s
STEP: Saw pod success
May 1 15:40:37.700: INFO: Pod "pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2" satisfied condition "success or failure"
May 1 15:40:37.703: INFO: Trying to get logs from node iruya-worker pod pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2 container test-container:
STEP: delete the pod
May 1 15:40:37.757: INFO: Waiting for pod pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2 to disappear
May 1 15:40:37.769: INFO: Pod pod-8a999b44-3c1f-4bce-b88b-f77c07e33aa2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:40:37.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9980" for this suite.
May 1 15:40:43.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:40:44.124: INFO: namespace emptydir-9980 deletion completed in 6.3520119s
• [SLOW TEST:13.344 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:40:44.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8d0f0a89-11a9-4257-8546-4e5b9f20ee7b
STEP: Creating a pod to test consume secrets
May 1 15:40:44.475: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7" in namespace "projected-4339" to be "success or failure"
May 1 15:40:44.479: INFO: Pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389473ms
May 1 15:40:46.484: INFO: Pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008427036s
May 1 15:40:48.488: INFO: Pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013005896s
May 1 15:40:50.492: INFO: Pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016722324s
STEP: Saw pod success
May 1 15:40:50.492: INFO: Pod "pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7" satisfied condition "success or failure"
May 1 15:40:50.495: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7 container projected-secret-volume-test:
STEP: delete the pod
May 1 15:40:50.590: INFO: Waiting for pod pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7 to disappear
May 1 15:40:50.596: INFO: Pod pod-projected-secrets-6cb0b592-cb00-42a9-9fa8-154e3d7671e7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:40:50.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4339" for this suite.
May 1 15:40:56.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:40:56.723: INFO: namespace projected-4339 deletion completed in 6.124294957s
• [SLOW TEST:12.599 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:40:56.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 1 15:40:56.802: INFO: Waiting up to 5m0s for pod "pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0" in namespace "emptydir-9957" to be "success or failure"
May 1 15:40:56.868: INFO: Pod "pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 65.50134ms
May 1 15:40:58.872: INFO: Pod "pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069240064s
May 1 15:41:00.876: INFO: Pod "pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073596416s
STEP: Saw pod success
May 1 15:41:00.876: INFO: Pod "pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0" satisfied condition "success or failure"
May 1 15:41:00.879: INFO: Trying to get logs from node iruya-worker pod pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0 container test-container:
STEP: delete the pod
May 1 15:41:01.036: INFO: Waiting for pod pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0 to disappear
May 1 15:41:01.280: INFO: Pod pod-4f8404d2-9572-46f2-9ba8-d2e1c53b3ae0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:41:01.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9957" for this suite.
May 1 15:41:07.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:41:07.438: INFO: namespace emptydir-9957 deletion completed in 6.154085939s
• [SLOW TEST:10.714 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:41:07.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
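The (non-root,0644,tmpfs) EmptyDir test above writes a file with mode 0644 into a memory-backed `emptyDir` as a non-root user and verifies the permissions. A rough equivalent pod spec — names, user ID, and commands are illustrative, and the real suite uses a dedicated test image rather than a shell one-liner:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo      # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # the "non-root" variant; illustrative UID
  containers:
  - name: test-container
    image: busybox
    command:
    - sh
    - -c
    # write a file, force 0644, and show the resulting mode
    - "echo content > /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -l /test-volume/test-file"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```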
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 1 15:41:07.478: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:41:07.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7632" for this suite.
May 1 15:41:13.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:41:13.636: INFO: namespace kubectl-7632 deletion completed in 6.078076217s
• [SLOW TEST:6.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:41:13.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
May 1 15:41:14.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
May 1 15:41:35.124: INFO: stderr: ""
May 1 15:41:35.124: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:41:35.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2214" for this suite.
May 1 15:41:41.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:41:41.894: INFO: namespace kubectl-2214 deletion completed in 6.766012109s
• [SLOW TEST:28.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:41:41.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-69a4ee1a-698a-4392-bc6a-07ca3d49a2a4
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-69a4ee1a-698a-4392-bc6a-07ca3d49a2a4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:41:50.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4406" for this suite.
May 1 15:42:17.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:42:17.481: INFO: namespace configmap-4406 deletion completed in 26.285778806s
• [SLOW TEST:35.586 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:42:17.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
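The ConfigMap test that finished above ("updates should be reflected in volume") relies on the kubelet periodically re-syncing configMap-backed volumes, so an edit to the ConfigMap eventually shows up inside the already-running pod — which is why the test simply updates the object and then polls the mounted file. A sketch of the setup, with illustrative names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo   # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo        # illustrative
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-demo
```

After `kubectl edit configmap configmap-test-upd-demo` (or a patch), the file under `/etc/configmap-volume/` changes on the kubelet's next sync period rather than instantly.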
May 1 15:42:18.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:18.196: INFO: Number of nodes with available pods: 0
May 1 15:42:18.196: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:19.200: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:19.202: INFO: Number of nodes with available pods: 0
May 1 15:42:19.202: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:20.393: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:20.396: INFO: Number of nodes with available pods: 0
May 1 15:42:20.396: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:21.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:21.461: INFO: Number of nodes with available pods: 0
May 1 15:42:21.461: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:22.238: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:22.241: INFO: Number of nodes with available pods: 0
May 1 15:42:22.241: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:23.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:23.809: INFO: Number of nodes with available pods: 0
May 1 15:42:23.809: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:24.201: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:24.204: INFO: Number of nodes with available pods: 0
May 1 15:42:24.204: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:25.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:25.299: INFO: Number of nodes with available pods: 0
May 1 15:42:25.299: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:26.234: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:26.239: INFO: Number of nodes with available pods: 0
May 1 15:42:26.239: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:27.375: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:27.449: INFO: Number of nodes with available pods: 1
May 1 15:42:27.449: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:28.201: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:28.204: INFO: Number of nodes with available pods: 1
May 1 15:42:28.204: INFO: Node iruya-worker is running more than one daemon pod
May 1 15:42:29.562: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:29.770: INFO: Number of nodes with available pods: 2
May 1 15:42:29.770: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 1 15:42:30.699: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:30.702: INFO: Number of nodes with available pods: 1
May 1 15:42:30.702: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:31.951: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:32.049: INFO: Number of nodes with available pods: 1
May 1 15:42:32.049: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:33.039: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:33.423: INFO: Number of nodes with available pods: 1
May 1 15:42:33.423: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:33.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:33.710: INFO: Number of nodes with available pods: 1
May 1 15:42:33.710: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:34.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:34.710: INFO: Number of nodes with available pods: 1
May 1 15:42:34.710: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:35.705: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:35.708: INFO: Number of nodes with available pods: 1
May 1 15:42:35.708: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:36.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:36.709: INFO: Number of nodes with available pods: 1
May 1 15:42:36.709: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:38.022: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:38.393: INFO: Number of nodes with available pods: 1
May 1 15:42:38.393: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:38.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:38.709: INFO: Number of nodes with available pods: 1
May 1 15:42:38.709: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:39.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:39.763: INFO: Number of nodes with available pods: 1
May 1 15:42:39.763: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:40.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:40.710: INFO: Number of nodes with available pods: 1
May 1 15:42:40.710: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:41.843: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:41.881: INFO: Number of nodes with available pods: 1
May 1 15:42:41.881: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:42.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:42.710: INFO: Number of nodes with available pods: 1
May 1 15:42:42.710: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:43.890: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:43.894: INFO: Number of nodes with available pods: 1
May 1 15:42:43.894: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:44.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:44.711: INFO: Number of nodes with available pods: 1
May 1 15:42:44.711: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:45.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:45.853: INFO: Number of nodes with available pods: 1
May 1 15:42:45.853: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:46.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:46.711: INFO: Number of nodes with available pods: 1
May 1 15:42:46.711: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:47.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:47.709: INFO: Number of nodes with available pods: 1
May 1 15:42:47.709: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 15:42:48.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 15:42:48.709: INFO: Number of nodes with available pods: 2
May 1 15:42:48.709: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7169, will wait for the garbage collector to delete the pods
May 1 15:42:48.772: INFO: Deleting DaemonSet.extensions daemon-set took: 7.436231ms
May 1 15:42:49.072: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.274601ms
May 1 15:43:03.944: INFO: Number of nodes with available pods: 0
May 1 15:43:03.944: INFO: Number of running nodes: 0, number of available pods: 0
May 1 15:43:03.951: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7169/daemonsets","resourceVersion":"8458418"},"items":null}
May 1 15:43:04.267: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7169/pods","resourceVersion":"8458419"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:43:04.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7169" for this suite.
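The repeated "can't tolerate node iruya-control-plane" lines above show why the DaemonSet only lands on the two worker nodes: the control-plane node carries the `node-role.kubernetes.io/master:NoSchedule` taint, which the test's pods do not tolerate. A minimal DaemonSet of the same shape (image and labels are illustrative), with the toleration you would add if control-plane scheduling were actually wanted:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # same name as in the test; spec is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
      # Uncomment to also schedule onto tainted control-plane nodes:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
```

Without the toleration, the controller simply skips the tainted node, exactly as the log reports.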
May 1 15:43:10.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:43:10.540: INFO: namespace daemonsets-7169 deletion completed in 6.176655673s
• [SLOW TEST:53.059 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:43:10.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-578e7b16-a913-4baa-a345-bfbfcaf11aa8
STEP: Creating secret with name secret-projected-all-test-volume-e20f5370-f8a7-4c30-9294-0b9cce92e8fc
STEP: Creating a pod to test Check all projections for projected volume plugin
May 1 15:43:10.613: INFO: Waiting up to 5m0s for pod "projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b" in namespace "projected-5613" to be "success or failure"
May 1 15:43:10.623: INFO: Pod "projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.504112ms
May 1 15:43:12.708: INFO: Pod "projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095119436s
May 1 15:43:14.713: INFO: Pod "projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100434034s
STEP: Saw pod success
May 1 15:43:14.713: INFO: Pod "projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b" satisfied condition "success or failure"
May 1 15:43:14.716: INFO: Trying to get logs from node iruya-worker pod projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b container projected-all-volume-test:
STEP: delete the pod
May 1 15:43:14.753: INFO: Waiting for pod projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b to disappear
May 1 15:43:14.761: INFO: Pod projected-volume-ae3b81c0-2663-4161-85da-e2b5fe398a9b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:43:14.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5613" for this suite.
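The "Projected combined" test above mounts a ConfigMap, a Secret, and downward-API fields through a single `projected` volume, which is the whole point of the projection API. A sketch of such a volume (names are illustrative, not the generated ones from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: demo-configmap    # illustrative
          items:
          - key: data-1
            path: cm-data
      - secret:
          name: demo-secret       # illustrative
          items:
          - key: data-1
            path: secret-data
```

All three sources appear as files under one mount point, which is what the test's container verifies before exiting successfully.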
May 1 15:43:20.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:43:20.867: INFO: namespace projected-5613 deletion completed in 6.102237315s
• [SLOW TEST:10.325 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:43:20.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:43:20.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3857" for this suite.
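The QOS-class test above submits a pod and checks that the API server populates `status.qosClass`. The class is derived from the resource spec: requests equal to limits for every container gives `Guaranteed`, requests below limits gives `Burstable`, and no requests or limits at all gives `BestEffort`. An illustrative Guaranteed pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                # illustrative
spec:
  containers:
  - name: qos-demo-container
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                   # equal to requests => Guaranteed
        cpu: 100m
        memory: 100Mi
```

`kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'` would then report `Guaranteed`.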
May 1 15:43:47.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:43:47.705: INFO: namespace pods-3857 deletion completed in 26.746903279s
• [SLOW TEST:26.837 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:43:47.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2041
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 1 15:43:48.854: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 1 15:44:24.333: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.5:8080/dial?request=hostName&protocol=http&host=10.244.2.4&port=8080&tries=1'] Namespace:pod-network-test-2041
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:44:24.333: INFO: >>> kubeConfig: /root/.kube/config I0501 15:44:24.358924 6 log.go:172] (0xc0009cda20) (0xc0010e8820) Create stream I0501 15:44:24.358968 6 log.go:172] (0xc0009cda20) (0xc0010e8820) Stream added, broadcasting: 1 I0501 15:44:24.362477 6 log.go:172] (0xc0009cda20) Reply frame received for 1 I0501 15:44:24.362508 6 log.go:172] (0xc0009cda20) (0xc0005be000) Create stream I0501 15:44:24.362519 6 log.go:172] (0xc0009cda20) (0xc0005be000) Stream added, broadcasting: 3 I0501 15:44:24.363342 6 log.go:172] (0xc0009cda20) Reply frame received for 3 I0501 15:44:24.363363 6 log.go:172] (0xc0009cda20) (0xc0005be1e0) Create stream I0501 15:44:24.363371 6 log.go:172] (0xc0009cda20) (0xc0005be1e0) Stream added, broadcasting: 5 I0501 15:44:24.364143 6 log.go:172] (0xc0009cda20) Reply frame received for 5 I0501 15:44:24.533884 6 log.go:172] (0xc0009cda20) Data frame received for 3 I0501 15:44:24.533936 6 log.go:172] (0xc0005be000) (3) Data frame handling I0501 15:44:24.533965 6 log.go:172] (0xc0005be000) (3) Data frame sent I0501 15:44:24.534244 6 log.go:172] (0xc0009cda20) Data frame received for 3 I0501 15:44:24.534272 6 log.go:172] (0xc0005be000) (3) Data frame handling I0501 15:44:24.534287 6 log.go:172] (0xc0009cda20) Data frame received for 5 I0501 15:44:24.534311 6 log.go:172] (0xc0005be1e0) (5) Data frame handling I0501 15:44:24.536613 6 log.go:172] (0xc0009cda20) Data frame received for 1 I0501 15:44:24.536640 6 log.go:172] (0xc0010e8820) (1) Data frame handling I0501 15:44:24.536672 6 log.go:172] (0xc0010e8820) (1) Data frame sent I0501 15:44:24.536697 6 log.go:172] (0xc0009cda20) (0xc0010e8820) Stream removed, broadcasting: 1 I0501 15:44:24.536728 6 log.go:172] (0xc0009cda20) Go away received I0501 15:44:24.537059 6 log.go:172] (0xc0009cda20) (0xc0010e8820) Stream removed, broadcasting: 1 I0501 15:44:24.537078 6 log.go:172] 
(0xc0009cda20) (0xc0005be000) Stream removed, broadcasting: 3
I0501 15:44:24.537088 6 log.go:172] (0xc0009cda20) (0xc0005be1e0) Stream removed, broadcasting: 5
May 1 15:44:24.537: INFO: Waiting for endpoints: map[]
May 1 15:44:24.541: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.5:8080/dial?request=hostName&protocol=http&host=10.244.1.140&port=8080&tries=1'] Namespace:pod-network-test-2041 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 15:44:24.541: INFO: >>> kubeConfig: /root/.kube/config
I0501 15:44:24.577929 6 log.go:172] (0xc0009fa840) (0xc0010e94a0) Create stream
I0501 15:44:24.577970 6 log.go:172] (0xc0009fa840) (0xc0010e94a0) Stream added, broadcasting: 1
I0501 15:44:24.582072 6 log.go:172] (0xc0009fa840) Reply frame received for 1
I0501 15:44:24.582136 6 log.go:172] (0xc0009fa840) (0xc0005be280) Create stream
I0501 15:44:24.582291 6 log.go:172] (0xc0009fa840) (0xc0005be280) Stream added, broadcasting: 3
I0501 15:44:24.583370 6 log.go:172] (0xc0009fa840) Reply frame received for 3
I0501 15:44:24.583421 6 log.go:172] (0xc0009fa840) (0xc0009a2140) Create stream
I0501 15:44:24.583439 6 log.go:172] (0xc0009fa840) (0xc0009a2140) Stream added, broadcasting: 5
I0501 15:44:24.584333 6 log.go:172] (0xc0009fa840) Reply frame received for 5
I0501 15:44:24.656849 6 log.go:172] (0xc0009fa840) Data frame received for 3
I0501 15:44:24.656882 6 log.go:172] (0xc0005be280) (3) Data frame handling
I0501 15:44:24.656906 6 log.go:172] (0xc0005be280) (3) Data frame sent
I0501 15:44:24.657542 6 log.go:172] (0xc0009fa840) Data frame received for 3
I0501 15:44:24.657621 6 log.go:172] (0xc0005be280) (3) Data frame handling
I0501 15:44:24.657952 6 log.go:172] (0xc0009fa840) Data frame received for 5
I0501 15:44:24.657976 6 log.go:172] (0xc0009a2140) (5) Data frame handling
I0501 15:44:24.659673 6 log.go:172] (0xc0009fa840) Data frame received for 1
I0501 15:44:24.659703 6 log.go:172] (0xc0010e94a0) (1) Data frame handling
I0501 15:44:24.659736 6 log.go:172] (0xc0010e94a0) (1) Data frame sent
I0501 15:44:24.659794 6 log.go:172] (0xc0009fa840) (0xc0010e94a0) Stream removed, broadcasting: 1
I0501 15:44:24.659964 6 log.go:172] (0xc0009fa840) (0xc0010e94a0) Stream removed, broadcasting: 1
I0501 15:44:24.659990 6 log.go:172] (0xc0009fa840) (0xc0005be280) Stream removed, broadcasting: 3
I0501 15:44:24.660000 6 log.go:172] (0xc0009fa840) (0xc0009a2140) Stream removed, broadcasting: 5
May 1 15:44:24.660: INFO: Waiting for endpoints: map[]
I0501 15:44:24.660142 6 log.go:172] (0xc0009fa840) Go away received
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:44:24.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2041" for this suite.
May 1 15:44:48.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:44:48.753: INFO: namespace pod-network-test-2041 deletion completed in 24.089007219s

• [SLOW TEST:61.047 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:44:48.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5ec99726-0494-441b-9203-8fc448ca86ce
STEP: Creating a pod to test consume secrets
May 1 15:44:49.263: INFO: Waiting up to 5m0s for pod "pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb" in namespace "secrets-9677" to be "success or failure"
May 1 15:44:49.267: INFO: Pod "pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565509ms
May 1 15:44:51.270: INFO: Pod "pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006943224s
May 1 15:44:53.274: INFO: Pod "pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010618939s
STEP: Saw pod success
May 1 15:44:53.274: INFO: Pod "pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb" satisfied condition "success or failure"
May 1 15:44:53.284: INFO: Trying to get logs from node iruya-worker pod pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb container secret-volume-test: <nil>
STEP: delete the pod
May 1 15:44:53.706: INFO: Waiting for pod pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb to disappear
May 1 15:44:53.710: INFO: Pod pod-secrets-55261b7d-339d-473c-a9bc-1ca0dd25cceb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:44:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9677" for this suite.
May 1 15:44:59.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:45:00.048: INFO: namespace secrets-9677 deletion completed in 6.289954212s

• [SLOW TEST:11.295 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:45:00.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 15:45:00.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372" in namespace "projected-3115" to be "success or failure"
May 1 15:45:00.359: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Pending", Reason="", readiness=false. Elapsed: 24.880358ms
May 1 15:45:02.363: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028056465s
May 1 15:45:04.367: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032101371s
May 1 15:45:06.767: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432147861s
May 1 15:45:08.771: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436788067s
May 1 15:45:10.776: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441134829s
STEP: Saw pod success
May 1 15:45:10.776: INFO: Pod "downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372" satisfied condition "success or failure"
May 1 15:45:10.779: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372 container client-container: <nil>
STEP: delete the pod
May 1 15:45:11.374: INFO: Waiting for pod downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372 to disappear
May 1 15:45:11.415: INFO: Pod downwardapi-volume-b99501a1-0379-4a14-9ab5-595a0aca9372 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:45:11.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3115" for this suite.
May 1 15:45:17.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:45:17.773: INFO: namespace projected-3115 deletion completed in 6.353615351s

• [SLOW TEST:17.724 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:45:17.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 15:45:17.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a" in namespace "downward-api-4727" to be "success or failure"
May 1 15:45:17.865: INFO: Pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.86954ms
May 1 15:45:19.869: INFO: Pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007087991s
May 1 15:45:21.874: INFO: Pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a": Phase="Running", Reason="", readiness=true. Elapsed: 4.012126225s
May 1 15:45:23.878: INFO: Pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016029793s
STEP: Saw pod success
May 1 15:45:23.878: INFO: Pod "downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a" satisfied condition "success or failure"
May 1 15:45:23.881: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a container client-container: <nil>
STEP: delete the pod
May 1 15:45:23.914: INFO: Waiting for pod downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a to disappear
May 1 15:45:24.078: INFO: Pod downwardapi-volume-213bbe26-6e6c-4a7c-a53f-07ee936e441a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:45:24.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4727" for this suite.
May 1 15:45:30.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:45:30.182: INFO: namespace downward-api-4727 deletion completed in 6.099534555s

• [SLOW TEST:12.409 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:45:30.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b5911122-a0f7-4a7e-bcab-699d343b4936
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b5911122-a0f7-4a7e-bcab-699d343b4936
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:46:53.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-113" for this suite.
May 1 15:47:17.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:47:17.883: INFO: namespace projected-113 deletion completed in 24.084705294s

• [SLOW TEST:107.701 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:47:17.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-816eda1e-9ef3-4f7e-826a-70e3656061ed
STEP: Creating a pod to test consume configMaps
May 1 15:47:18.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c" in namespace "projected-9616" to be "success or failure"
May 1 15:47:18.343: INFO: Pod "pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.285711ms
May 1 15:47:20.346: INFO: Pod "pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043759613s
May 1 15:47:22.457: INFO: Pod "pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154375126s
STEP: Saw pod success
May 1 15:47:22.457: INFO: Pod "pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c" satisfied condition "success or failure"
May 1 15:47:22.460: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 1 15:47:22.492: INFO: Waiting for pod pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c to disappear
May 1 15:47:22.531: INFO: Pod pod-projected-configmaps-93ff2222-df2e-45fe-b293-b16e5283ba2c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:47:22.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9616" for this suite.
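For readers following along, the "mappings and Item mode set" test above boils down to mounting a projected configMap volume whose `items` remap a key to a new path with an explicit per-item `mode`. A minimal sketch of such a pod (names, key, path, and mode are illustrative; the suite's actual manifest is not shown in this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example  # hypothetical
          items:
          - key: data-1          # key in the ConfigMap
            path: path/to/data-2 # remapped path inside the volume
            mode: 0400           # the "Item mode" being asserted
```

The test then treats the pod like the others in this log: wait for "success or failure", read the container's logs, and verify the file content and mode.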
May 1 15:47:30.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:47:30.836: INFO: namespace projected-9616 deletion completed in 8.301231401s

• [SLOW TEST:12.953 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:47:30.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
May 1 15:47:31.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 1 15:47:33.058: INFO: stderr: ""
May 1 15:47:33.058: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:47:33.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4351" for this suite.
May 1 15:47:39.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:47:39.939: INFO: namespace kubectl-4351 deletion completed in 6.877434462s

• [SLOW TEST:9.102 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:47:39.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e in namespace container-probe-5816
May 1 15:47:51.342: INFO: Started pod liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e in namespace container-probe-5816
STEP: checking the pod's current state and verifying that restartCount is present
May 1 15:47:51.345: INFO: Initial restart count of pod liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is 0
May 1 15:48:09.761: INFO: Restart count of pod container-probe-5816/liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is now 1 (18.416737797s elapsed)
May 1 15:48:30.685: INFO: Restart count of pod container-probe-5816/liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is now 2 (39.340822695s elapsed)
May 1 15:48:49.019: INFO: Restart count of pod container-probe-5816/liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is now 3 (57.674728894s elapsed)
May 1 15:49:07.242: INFO: Restart count of pod container-probe-5816/liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is now 4 (1m15.89788685s elapsed)
May 1 15:50:15.647: INFO: Restart count of pod container-probe-5816/liveness-2395c14a-d3ab-40fb-b4ed-232871ffb22e is now 5 (2m24.302415124s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:50:15.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5816" for this suite.
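The monotonically increasing restart count above is produced by a pod whose liveness probe is designed to start failing shortly after each (re)start, so the kubelet keeps restarting the container. A minimal sketch of such a pod, assuming a busybox image (illustrative only; the manifest the suite actually uses is not shown in this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 10s, then the probe file disappears and the probe fails
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

Each probe failure triggers a restart, and the test polls `status.containerStatuses[].restartCount` to assert it only ever increases, as the 1-through-5 progression in the log shows.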
May 1 15:50:22.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:50:22.391: INFO: namespace container-probe-5816 deletion completed in 6.613703287s

• [SLOW TEST:162.451 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:50:22.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-29909523-a88d-4552-b7b7-97dbc1475013 in namespace container-probe-425
May 1 15:50:26.632: INFO: Started pod test-webserver-29909523-a88d-4552-b7b7-97dbc1475013 in namespace container-probe-425
STEP: checking the pod's current state and verifying that restartCount is present
May 1 15:50:26.635: INFO: Initial restart count of pod test-webserver-29909523-a88d-4552-b7b7-97dbc1475013 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:54:27.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-425" for this suite.
May 1 15:54:33.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:54:34.174: INFO: namespace container-probe-425 deletion completed in 6.294945821s

• [SLOW TEST:251.782 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:54:34.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
May 1 15:54:34.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8455'
May 1 15:54:53.752: INFO: stderr: ""
May 1 15:54:53.752: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:54:53.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8455'
May 1 15:54:54.008: INFO: stderr: ""
May 1 15:54:54.008: INFO: stdout: "update-demo-nautilus-k98pn "
STEP: Replicas for name=update-demo: expected=2 actual=1
May 1 15:54:59.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8455'
May 1 15:54:59.111: INFO: stderr: ""
May 1 15:54:59.111: INFO: stdout: "update-demo-nautilus-6rg7c update-demo-nautilus-k98pn "
May 1 15:54:59.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rg7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:54:59.221: INFO: stderr: ""
May 1 15:54:59.221: INFO: stdout: ""
May 1 15:54:59.221: INFO: update-demo-nautilus-6rg7c is created but not running
May 1 15:55:04.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8455'
May 1 15:55:04.321: INFO: stderr: ""
May 1 15:55:04.321: INFO: stdout: "update-demo-nautilus-6rg7c update-demo-nautilus-k98pn "
May 1 15:55:04.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rg7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:04.528: INFO: stderr: ""
May 1 15:55:04.528: INFO: stdout: "true"
May 1 15:55:04.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rg7c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:04.716: INFO: stderr: ""
May 1 15:55:04.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:55:04.716: INFO: validating pod update-demo-nautilus-6rg7c
May 1 15:55:04.721: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:55:04.721: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:55:04.721: INFO: update-demo-nautilus-6rg7c is verified up and running
May 1 15:55:04.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k98pn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:04.822: INFO: stderr: ""
May 1 15:55:04.822: INFO: stdout: "true"
May 1 15:55:04.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k98pn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:04.909: INFO: stderr: ""
May 1 15:55:04.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:55:04.909: INFO: validating pod update-demo-nautilus-k98pn
May 1 15:55:04.913: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:55:04.913: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:55:04.913: INFO: update-demo-nautilus-k98pn is verified up and running
STEP: rolling-update to new replication controller
May 1 15:55:04.915: INFO: scanned /root for discovery docs:
May 1 15:55:04.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8455'
May 1 15:55:37.294: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 1 15:55:37.294: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:55:37.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8455'
May 1 15:55:37.727: INFO: stderr: ""
May 1 15:55:37.727: INFO: stdout: "update-demo-kitten-j55rq update-demo-kitten-jwtws "
May 1 15:55:37.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j55rq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:37.965: INFO: stderr: ""
May 1 15:55:37.965: INFO: stdout: "true"
May 1 15:55:37.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j55rq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:38.056: INFO: stderr: ""
May 1 15:55:38.056: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 1 15:55:38.056: INFO: validating pod update-demo-kitten-j55rq
May 1 15:55:38.059: INFO: got data: { "image": "kitten.jpg" }
May 1 15:55:38.059: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 1 15:55:38.059: INFO: update-demo-kitten-j55rq is verified up and running
May 1 15:55:38.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwtws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:38.150: INFO: stderr: ""
May 1 15:55:38.150: INFO: stdout: "true"
May 1 15:55:38.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jwtws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8455'
May 1 15:55:38.248: INFO: stderr: ""
May 1 15:55:38.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 1 15:55:38.248: INFO: validating pod update-demo-kitten-jwtws
May 1 15:55:38.311: INFO: got data: { "image": "kitten.jpg" }
May 1 15:55:38.311: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 1 15:55:38.311: INFO: update-demo-kitten-jwtws is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:55:38.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8455" for this suite.
May 1 15:56:04.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:04.705: INFO: namespace kubectl-8455 deletion completed in 26.389085026s • [SLOW TEST:90.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:56:04.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 15:56:04.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4508' 
May 1 15:56:04.890: INFO: stderr: "" May 1 15:56:04.891: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 1 15:56:14.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4508 -o json' May 1 15:56:15.026: INFO: stderr: "" May 1 15:56:15.026: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-01T15:56:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4508\",\n \"resourceVersion\": \"8460413\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4508/pods/e2e-test-nginx-pod\",\n \"uid\": \"9ec9dfc2-5af3-44a0-8583-e33c14a017f9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-v4kmt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"default-token-v4kmt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-v4kmt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:56:04Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:56:11Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:56:11Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:56:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a87dac06aea1f906f2344abca4e7ed1b41fb59c3952c57026ee431cf0026f17a\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-01T15:56:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.145\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-01T15:56:04Z\"\n }\n}\n" STEP: replace the image in the pod May 1 15:56:15.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4508' May 1 15:56:15.380: INFO: stderr: "" May 1 15:56:15.380: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 1 15:56:15.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods 
e2e-test-nginx-pod --namespace=kubectl-4508' May 1 15:56:23.951: INFO: stderr: "" May 1 15:56:23.951: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:56:23.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4508" for this suite. May 1 15:56:30.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:30.199: INFO: namespace kubectl-4508 deletion completed in 6.138823376s • [SLOW TEST:25.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:56:30.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:56:34.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7016" for this suite. May 1 15:56:41.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:41.432: INFO: namespace emptydir-wrapper-7016 deletion completed in 6.647696872s • [SLOW TEST:11.233 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:56:41.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-b37c5f0a-f96b-489c-814f-bdf799f2bc2c STEP: Creating a pod to test consume configMaps May 1 15:56:41.578: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d" in namespace "projected-1829" to be "success or failure" May 1 15:56:41.610: INFO: Pod 
"pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.79136ms May 1 15:56:43.681: INFO: Pod "pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102153769s May 1 15:56:45.685: INFO: Pod "pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d": Phase="Running", Reason="", readiness=true. Elapsed: 4.106472161s May 1 15:56:47.708: INFO: Pod "pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129944279s STEP: Saw pod success May 1 15:56:47.708: INFO: Pod "pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d" satisfied condition "success or failure" May 1 15:56:47.712: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d container projected-configmap-volume-test: STEP: delete the pod May 1 15:56:48.090: INFO: Waiting for pod pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d to disappear May 1 15:56:48.095: INFO: Pod pod-projected-configmaps-83eb27e6-c7cd-4abe-98a0-df09e405447d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:56:48.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1829" for this suite. 
May 1 15:56:54.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:56:54.466: INFO: namespace projected-1829 deletion completed in 6.366762013s • [SLOW TEST:13.033 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:56:54.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-fdc4ec1b-e0b4-4045-b8f0-8b076be5ca7e in namespace container-probe-1543 May 1 15:57:02.955: INFO: Started pod liveness-fdc4ec1b-e0b4-4045-b8f0-8b076be5ca7e in namespace container-probe-1543 STEP: checking the pod's current state and verifying that restartCount is present May 1 15:57:02.957: INFO: Initial restart count of pod liveness-fdc4ec1b-e0b4-4045-b8f0-8b076be5ca7e is 0 May 1 15:57:19.941: INFO: Restart count 
of pod container-probe-1543/liveness-fdc4ec1b-e0b4-4045-b8f0-8b076be5ca7e is now 1 (16.984799494s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:57:20.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1543" for this suite. May 1 15:57:26.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:57:26.185: INFO: namespace container-probe-1543 deletion completed in 6.111476754s • [SLOW TEST:31.718 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:57:26.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod 
to test downward API volume plugin May 1 15:57:26.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc" in namespace "downward-api-1962" to be "success or failure" May 1 15:57:26.239: INFO: Pod "downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.063965ms May 1 15:57:28.243: INFO: Pod "downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006961125s May 1 15:57:30.247: INFO: Pod "downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010888124s STEP: Saw pod success May 1 15:57:30.247: INFO: Pod "downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc" satisfied condition "success or failure" May 1 15:57:30.250: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc container client-container: STEP: delete the pod May 1 15:57:30.271: INFO: Waiting for pod downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc to disappear May 1 15:57:30.275: INFO: Pod downwardapi-volume-ce3b5648-c27f-4d05-85d7-14d3b70ad8fc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:57:30.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1962" for this suite. 
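This test mounts a downward-API volume file whose content is the container's memory limit (via a `resourceFieldRef`). With a divisor of `"1"` the value is emitted as plain bytes, so a `Mi` quantity appears as a power-of-two byte count. The helper and the 64Mi figure below are illustrative only; the log does not show this pod's actual limit:

```go
package main

import "fmt"

// bytesForMi converts a Mi (mebibyte) quantity to the plain-byte form a
// downward-API resourceFieldRef emits when its divisor is "1".
// Illustrative helper; not part of the e2e suite.
func bytesForMi(mi int64) int64 { return mi << 20 }

func main() {
	// e.g. a hypothetical 64Mi limit would surface in the file as:
	fmt.Println(bytesForMi(64)) // 67108864
}
```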
May 1 15:57:36.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:57:36.365: INFO: namespace downward-api-1962 deletion completed in 6.086540835s • [SLOW TEST:10.179 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:57:36.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-680b0d06-3a80-4b62-9bcd-efad2b601b7d STEP: Creating a pod to test consume secrets May 1 15:57:36.487: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e" in namespace "projected-5040" to be "success or failure" May 1 15:57:36.491: INFO: Pod "pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.86448ms May 1 15:57:38.496: INFO: Pod "pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008304399s May 1 15:57:40.519: INFO: Pod "pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032060097s STEP: Saw pod success May 1 15:57:40.519: INFO: Pod "pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e" satisfied condition "success or failure" May 1 15:57:40.522: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e container projected-secret-volume-test: STEP: delete the pod May 1 15:57:40.610: INFO: Waiting for pod pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e to disappear May 1 15:57:40.617: INFO: Pod pod-projected-secrets-2ef9fa5d-4622-410d-9e7d-c958b2f7309e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:57:40.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5040" for this suite. 
May 1 15:57:46.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:57:46.744: INFO: namespace projected-5040 deletion completed in 6.109853881s • [SLOW TEST:10.378 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:57:46.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 15:57:51.420: INFO: Successfully updated pod "pod-update-952c188c-deea-4602-85ac-133c72f96199" STEP: verifying the updated pod is in kubernetes May 1 15:57:51.454: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:57:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pods-7995" for this suite. May 1 15:58:17.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:58:18.048: INFO: namespace pods-7995 deletion completed in 26.389414251s • [SLOW TEST:31.304 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:58:18.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-474/configmap-test-a5edf944-9288-4863-892a-eb79f3eeaab5 STEP: Creating a pod to test consume configMaps May 1 15:58:18.159: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc" in namespace "configmap-474" to be "success or failure" May 1 15:58:18.174: INFO: Pod "pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.321718ms May 1 15:58:20.189: INFO: Pod "pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029951316s May 1 15:58:22.211: INFO: Pod "pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052326936s STEP: Saw pod success May 1 15:58:22.212: INFO: Pod "pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc" satisfied condition "success or failure" May 1 15:58:22.215: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc container env-test: STEP: delete the pod May 1 15:58:22.236: INFO: Waiting for pod pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc to disappear May 1 15:58:22.274: INFO: Pod pod-configmaps-ae3dc4fe-60f6-45b3-a1a2-6a57ea4c71cc no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:58:22.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-474" for this suite. May 1 15:58:28.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 15:58:28.378: INFO: namespace configmap-474 deletion completed in 6.099242984s • [SLOW TEST:10.329 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 15:58:28.378: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 1 15:58:28.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623" in namespace "projected-3941" to be "success or failure" May 1 15:58:28.948: INFO: Pod "downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623": Phase="Pending", Reason="", readiness=false. Elapsed: 16.520658ms May 1 15:58:30.952: INFO: Pod "downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021030163s May 1 15:58:32.955: INFO: Pod "downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024201835s STEP: Saw pod success May 1 15:58:32.955: INFO: Pod "downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623" satisfied condition "success or failure" May 1 15:58:32.957: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623 container client-container: STEP: delete the pod May 1 15:58:32.991: INFO: Waiting for pod downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623 to disappear May 1 15:58:33.032: INFO: Pod downwardapi-volume-108c7a66-0044-4975-b5d3-b01e1c8dd623 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 15:58:33.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3941" for this suite. 
May 1 15:58:39.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:58:39.124: INFO: namespace projected-3941 deletion completed in 6.088234894s
• [SLOW TEST:10.746 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:58:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 1 15:58:39.265: INFO: Waiting up to 5m0s for pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd" in namespace "emptydir-8159" to be "success or failure"
May 1 15:58:39.307: INFO: Pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 42.220934ms
May 1 15:58:41.311: INFO: Pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046539686s
May 1 15:58:43.315: INFO: Pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050645984s
May 1 15:58:45.319: INFO: Pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053952628s
STEP: Saw pod success
May 1 15:58:45.319: INFO: Pod "pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd" satisfied condition "success or failure"
May 1 15:58:45.336: INFO: Trying to get logs from node iruya-worker pod pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd container test-container: 
STEP: delete the pod
May 1 15:58:45.378: INFO: Waiting for pod pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd to disappear
May 1 15:58:45.385: INFO: Pod pod-06e5bee0-748d-4066-baa3-4104f3bcfcfd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:58:45.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8159" for this suite.
May 1 15:58:51.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:58:51.492: INFO: namespace emptydir-8159 deletion completed in 6.102054809s
• [SLOW TEST:12.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:58:51.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-1dccce2a-2de8-46bc-a526-44627e63b122
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 15:58:51.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3038" for this suite.
May 1 15:58:57.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 15:58:57.779: INFO: namespace secrets-3038 deletion completed in 6.159665151s
• [SLOW TEST:6.287 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 15:58:57.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9109 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 1 15:58:58.079: INFO: Found 0 stateful pods, waiting for 3 May 1 15:59:08.344: INFO: Found 2 stateful pods, waiting for 3 May 1 15:59:18.085: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:18.085: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:18.085: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 1 15:59:18.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 15:59:19.512: INFO: stderr: "I0501 15:59:19.199323 1493 log.go:172] (0xc0008d2580) (0xc0008728c0) Create stream\nI0501 15:59:19.199360 1493 log.go:172] (0xc0008d2580) (0xc0008728c0) Stream added, broadcasting: 1\nI0501 15:59:19.203006 1493 log.go:172] (0xc0008d2580) Reply frame received for 1\nI0501 15:59:19.203047 1493 log.go:172] (0xc0008d2580) (0xc0009bc000) Create stream\nI0501 15:59:19.203066 1493 log.go:172] (0xc0008d2580) (0xc0009bc000) Stream added, broadcasting: 3\nI0501 15:59:19.204537 1493 log.go:172] (0xc0008d2580) Reply frame received for 3\nI0501 15:59:19.204566 1493 log.go:172] (0xc0008d2580) (0xc000872960) Create stream\nI0501 15:59:19.204575 1493 log.go:172] (0xc0008d2580) (0xc000872960) Stream added, broadcasting: 5\nI0501 15:59:19.208578 1493 log.go:172] 
(0xc0008d2580) Reply frame received for 5\nI0501 15:59:19.269054 1493 log.go:172] (0xc0008d2580) Data frame received for 5\nI0501 15:59:19.269081 1493 log.go:172] (0xc000872960) (5) Data frame handling\nI0501 15:59:19.269092 1493 log.go:172] (0xc000872960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 15:59:19.500539 1493 log.go:172] (0xc0008d2580) Data frame received for 5\nI0501 15:59:19.500572 1493 log.go:172] (0xc000872960) (5) Data frame handling\nI0501 15:59:19.500592 1493 log.go:172] (0xc0008d2580) Data frame received for 3\nI0501 15:59:19.500602 1493 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0501 15:59:19.500615 1493 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0501 15:59:19.500624 1493 log.go:172] (0xc0008d2580) Data frame received for 3\nI0501 15:59:19.500630 1493 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0501 15:59:19.507597 1493 log.go:172] (0xc0008d2580) Data frame received for 1\nI0501 15:59:19.507630 1493 log.go:172] (0xc0008728c0) (1) Data frame handling\nI0501 15:59:19.507642 1493 log.go:172] (0xc0008728c0) (1) Data frame sent\nI0501 15:59:19.507654 1493 log.go:172] (0xc0008d2580) (0xc0008728c0) Stream removed, broadcasting: 1\nI0501 15:59:19.507691 1493 log.go:172] (0xc0008d2580) Go away received\nI0501 15:59:19.507941 1493 log.go:172] (0xc0008d2580) (0xc0008728c0) Stream removed, broadcasting: 1\nI0501 15:59:19.507995 1493 log.go:172] (0xc0008d2580) (0xc0009bc000) Stream removed, broadcasting: 3\nI0501 15:59:19.508010 1493 log.go:172] (0xc0008d2580) (0xc000872960) Stream removed, broadcasting: 5\n" May 1 15:59:19.512: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 15:59:19.512: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 1 15:59:29.624: 
INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 1 15:59:39.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 15:59:40.215: INFO: stderr: "I0501 15:59:40.122609 1516 log.go:172] (0xc0006088f0) (0xc00010a6e0) Create stream\nI0501 15:59:40.122652 1516 log.go:172] (0xc0006088f0) (0xc00010a6e0) Stream added, broadcasting: 1\nI0501 15:59:40.124822 1516 log.go:172] (0xc0006088f0) Reply frame received for 1\nI0501 15:59:40.124885 1516 log.go:172] (0xc0006088f0) (0xc000550000) Create stream\nI0501 15:59:40.124913 1516 log.go:172] (0xc0006088f0) (0xc000550000) Stream added, broadcasting: 3\nI0501 15:59:40.126102 1516 log.go:172] (0xc0006088f0) Reply frame received for 3\nI0501 15:59:40.126151 1516 log.go:172] (0xc0006088f0) (0xc000348000) Create stream\nI0501 15:59:40.126167 1516 log.go:172] (0xc0006088f0) (0xc000348000) Stream added, broadcasting: 5\nI0501 15:59:40.127194 1516 log.go:172] (0xc0006088f0) Reply frame received for 5\nI0501 15:59:40.191758 1516 log.go:172] (0xc0006088f0) Data frame received for 5\nI0501 15:59:40.191786 1516 log.go:172] (0xc000348000) (5) Data frame handling\nI0501 15:59:40.191809 1516 log.go:172] (0xc000348000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 15:59:40.210101 1516 log.go:172] (0xc0006088f0) Data frame received for 5\nI0501 15:59:40.210126 1516 log.go:172] (0xc000348000) (5) Data frame handling\nI0501 15:59:40.210156 1516 log.go:172] (0xc0006088f0) Data frame received for 3\nI0501 15:59:40.210167 1516 log.go:172] (0xc000550000) (3) Data frame handling\nI0501 15:59:40.210176 1516 log.go:172] (0xc000550000) (3) Data frame sent\nI0501 15:59:40.210185 1516 log.go:172] (0xc0006088f0) Data frame received for 3\nI0501 15:59:40.210192 1516 log.go:172] (0xc000550000) (3) Data frame handling\nI0501 
15:59:40.211550 1516 log.go:172] (0xc0006088f0) Data frame received for 1\nI0501 15:59:40.211563 1516 log.go:172] (0xc00010a6e0) (1) Data frame handling\nI0501 15:59:40.211577 1516 log.go:172] (0xc00010a6e0) (1) Data frame sent\nI0501 15:59:40.211765 1516 log.go:172] (0xc0006088f0) (0xc00010a6e0) Stream removed, broadcasting: 1\nI0501 15:59:40.211840 1516 log.go:172] (0xc0006088f0) Go away received\nI0501 15:59:40.212039 1516 log.go:172] (0xc0006088f0) (0xc00010a6e0) Stream removed, broadcasting: 1\nI0501 15:59:40.212054 1516 log.go:172] (0xc0006088f0) (0xc000550000) Stream removed, broadcasting: 3\nI0501 15:59:40.212059 1516 log.go:172] (0xc0006088f0) (0xc000348000) Stream removed, broadcasting: 5\n" May 1 15:59:40.215: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 15:59:40.215: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 15:59:50.304: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete update May 1 15:59:50.304: INFO: Waiting for Pod statefulset-9109/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 15:59:50.304: INFO: Waiting for Pod statefulset-9109/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 16:00:00.312: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete update May 1 16:00:00.312: INFO: Waiting for Pod statefulset-9109/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 16:00:00.312: INFO: Waiting for Pod statefulset-9109/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 16:00:10.311: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete update May 1 16:00:10.312: INFO: Waiting for Pod statefulset-9109/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 16:00:20.312: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete update May 1 16:00:20.312: INFO: Waiting for 
Pod statefulset-9109/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 1 16:00:30.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:00:30.536: INFO: stderr: "I0501 16:00:30.436380 1533 log.go:172] (0xc000a60630) (0xc0005dab40) Create stream\nI0501 16:00:30.436428 1533 log.go:172] (0xc000a60630) (0xc0005dab40) Stream added, broadcasting: 1\nI0501 16:00:30.438399 1533 log.go:172] (0xc000a60630) Reply frame received for 1\nI0501 16:00:30.438448 1533 log.go:172] (0xc000a60630) (0xc0005e2000) Create stream\nI0501 16:00:30.438459 1533 log.go:172] (0xc000a60630) (0xc0005e2000) Stream added, broadcasting: 3\nI0501 16:00:30.439583 1533 log.go:172] (0xc000a60630) Reply frame received for 3\nI0501 16:00:30.439634 1533 log.go:172] (0xc000a60630) (0xc0005da3c0) Create stream\nI0501 16:00:30.439651 1533 log.go:172] (0xc000a60630) (0xc0005da3c0) Stream added, broadcasting: 5\nI0501 16:00:30.440454 1533 log.go:172] (0xc000a60630) Reply frame received for 5\nI0501 16:00:30.499281 1533 log.go:172] (0xc000a60630) Data frame received for 5\nI0501 16:00:30.499318 1533 log.go:172] (0xc0005da3c0) (5) Data frame handling\nI0501 16:00:30.499346 1533 log.go:172] (0xc0005da3c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 16:00:30.530024 1533 log.go:172] (0xc000a60630) Data frame received for 5\nI0501 16:00:30.530048 1533 log.go:172] (0xc0005da3c0) (5) Data frame handling\nI0501 16:00:30.530066 1533 log.go:172] (0xc000a60630) Data frame received for 3\nI0501 16:00:30.530070 1533 log.go:172] (0xc0005e2000) (3) Data frame handling\nI0501 16:00:30.530076 1533 log.go:172] (0xc0005e2000) (3) Data frame sent\nI0501 16:00:30.530188 1533 log.go:172] (0xc000a60630) Data frame received for 3\nI0501 16:00:30.530208 1533 log.go:172] (0xc0005e2000) (3) Data frame 
handling\nI0501 16:00:30.531868 1533 log.go:172] (0xc000a60630) Data frame received for 1\nI0501 16:00:30.531883 1533 log.go:172] (0xc0005dab40) (1) Data frame handling\nI0501 16:00:30.531891 1533 log.go:172] (0xc0005dab40) (1) Data frame sent\nI0501 16:00:30.531898 1533 log.go:172] (0xc000a60630) (0xc0005dab40) Stream removed, broadcasting: 1\nI0501 16:00:30.532017 1533 log.go:172] (0xc000a60630) Go away received\nI0501 16:00:30.532127 1533 log.go:172] (0xc000a60630) (0xc0005dab40) Stream removed, broadcasting: 1\nI0501 16:00:30.532143 1533 log.go:172] (0xc000a60630) (0xc0005e2000) Stream removed, broadcasting: 3\nI0501 16:00:30.532152 1533 log.go:172] (0xc000a60630) (0xc0005da3c0) Stream removed, broadcasting: 5\n" May 1 16:00:30.536: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:00:30.536: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:00:40.568: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 1 16:00:50.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9109 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:00:50.897: INFO: stderr: "I0501 16:00:50.828361 1554 log.go:172] (0xc00013adc0) (0xc000348820) Create stream\nI0501 16:00:50.828434 1554 log.go:172] (0xc00013adc0) (0xc000348820) Stream added, broadcasting: 1\nI0501 16:00:50.833782 1554 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0501 16:00:50.833867 1554 log.go:172] (0xc00013adc0) (0xc0003488c0) Create stream\nI0501 16:00:50.833887 1554 log.go:172] (0xc00013adc0) (0xc0003488c0) Stream added, broadcasting: 3\nI0501 16:00:50.834908 1554 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0501 16:00:50.834940 1554 log.go:172] (0xc00013adc0) (0xc0008dc000) Create stream\nI0501 16:00:50.834953 1554 log.go:172] (0xc00013adc0) (0xc0008dc000) 
Stream added, broadcasting: 5\nI0501 16:00:50.835832 1554 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0501 16:00:50.888503 1554 log.go:172] (0xc00013adc0) Data frame received for 3\nI0501 16:00:50.888541 1554 log.go:172] (0xc0003488c0) (3) Data frame handling\nI0501 16:00:50.888563 1554 log.go:172] (0xc0003488c0) (3) Data frame sent\nI0501 16:00:50.888709 1554 log.go:172] (0xc00013adc0) Data frame received for 5\nI0501 16:00:50.888746 1554 log.go:172] (0xc00013adc0) Data frame received for 3\nI0501 16:00:50.888774 1554 log.go:172] (0xc0003488c0) (3) Data frame handling\nI0501 16:00:50.888795 1554 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0501 16:00:50.888831 1554 log.go:172] (0xc0008dc000) (5) Data frame sent\nI0501 16:00:50.888853 1554 log.go:172] (0xc00013adc0) Data frame received for 5\nI0501 16:00:50.888869 1554 log.go:172] (0xc0008dc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 16:00:50.890751 1554 log.go:172] (0xc00013adc0) Data frame received for 1\nI0501 16:00:50.890774 1554 log.go:172] (0xc000348820) (1) Data frame handling\nI0501 16:00:50.890788 1554 log.go:172] (0xc000348820) (1) Data frame sent\nI0501 16:00:50.890803 1554 log.go:172] (0xc00013adc0) (0xc000348820) Stream removed, broadcasting: 1\nI0501 16:00:50.890868 1554 log.go:172] (0xc00013adc0) Go away received\nI0501 16:00:50.892304 1554 log.go:172] (0xc00013adc0) (0xc000348820) Stream removed, broadcasting: 1\nI0501 16:00:50.892335 1554 log.go:172] (0xc00013adc0) (0xc0003488c0) Stream removed, broadcasting: 3\nI0501 16:00:50.892387 1554 log.go:172] (0xc00013adc0) (0xc0008dc000) Stream removed, broadcasting: 5\n" May 1 16:00:50.897: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:00:50.897: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:01:20.915: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete 
update
May 1 16:01:20.915: INFO: Waiting for Pod statefulset-9109/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 1 16:01:30.923: INFO: Waiting for StatefulSet statefulset-9109/ss2 to complete update
May 1 16:01:30.923: INFO: Waiting for Pod statefulset-9109/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 1 16:01:40.923: INFO: Deleting all statefulset in ns statefulset-9109
May 1 16:01:40.925: INFO: Scaling statefulset ss2 to 0
May 1 16:02:20.943: INFO: Waiting for statefulset status.replicas updated to 0
May 1 16:02:20.945: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:02:21.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9109" for this suite.
May 1 16:02:29.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:02:29.360: INFO: namespace statefulset-9109 deletion completed in 8.156448219s
• [SLOW TEST:211.580 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:02:29.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 1 16:02:30.209: INFO: Pod name pod-release: Found 0 pods out of 1
May 1 16:02:35.260: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:02:36.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8830" for this suite.
May 1 16:02:43.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:02:43.599: INFO: namespace replication-controller-8830 deletion completed in 6.923518794s
• [SLOW TEST:14.238 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:02:43.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:02:43.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
May 1 16:02:44.029: INFO: stderr: ""
May 1 16:02:44.029: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T15:22:12Z\", GoVersion:\"go1.12.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:02:44.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9463" for this suite.
May 1 16:02:50.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:02:50.117: INFO: namespace kubectl-9463 deletion completed in 6.083439157s
• [SLOW TEST:6.518 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:02:50.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7236
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7236
STEP: Deleting pre-stop pod
May 1 16:03:04.037: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:03:04.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7236" for this suite.
May 1 16:03:44.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:03:44.663: INFO: namespace prestop-7236 deletion completed in 40.586562081s
• [SLOW TEST:54.546 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:03:44.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:03:44.867: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 1 16:03:44.876: INFO: Number of nodes with available pods: 0
May 1 16:03:44.876: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 1 16:03:44.911: INFO: Number of nodes with available pods: 0
May 1 16:03:44.911: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:45.914: INFO: Number of nodes with available pods: 0
May 1 16:03:45.914: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:46.915: INFO: Number of nodes with available pods: 0
May 1 16:03:46.915: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:47.915: INFO: Number of nodes with available pods: 0
May 1 16:03:47.915: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:49.147: INFO: Number of nodes with available pods: 1
May 1 16:03:49.147: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 1 16:03:49.615: INFO: Number of nodes with available pods: 1
May 1 16:03:49.615: INFO: Number of running nodes: 0, number of available pods: 1
May 1 16:03:50.619: INFO: Number of nodes with available pods: 0
May 1 16:03:50.619: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 1 16:03:50.635: INFO: Number of nodes with available pods: 0
May 1 16:03:50.635: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:51.675: INFO: Number of nodes with available pods: 0
May 1 16:03:51.675: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:52.640: INFO: Number of nodes with available pods: 0
May 1 16:03:52.640: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:53.639: INFO: Number of nodes with available pods: 0
May 1 16:03:53.640: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:54.639: INFO: Number of nodes with available pods: 0
May 1 16:03:54.639: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:55.641: INFO: Number of nodes with available pods: 0
May 1 16:03:55.641: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:56.666: INFO: Number of nodes with available pods: 0
May 1 16:03:56.666: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:57.639: INFO: Number of nodes with available pods: 0
May 1 16:03:57.639: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:58.640: INFO: Number of nodes with available pods: 0
May 1 16:03:58.640: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:03:59.640: INFO: Number of nodes with available pods: 0
May 1 16:03:59.640: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:00.639: INFO: Number of nodes with available pods: 0
May 1 16:04:00.639: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:01.729: INFO: Number of nodes with available pods: 0
May 1 16:04:01.729: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:02.640: INFO: Number of nodes with available pods: 0
May 1 16:04:02.640: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:03.659: INFO: Number of nodes with available pods: 0
May 1 16:04:03.659: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:04.639: INFO: Number of nodes with available pods: 0
May 1 16:04:04.639: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:05.807: INFO: Number of nodes with available pods: 0
May 1 16:04:05.807: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:04:06.836: INFO: Number of nodes with available pods: 1
May 1 16:04:06.836: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6907, will wait for the garbage collector to delete the pods
May 1 16:04:06.899: INFO: Deleting DaemonSet.extensions daemon-set took: 6.536019ms
May 1 16:04:07.199: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.378692ms
May 1 16:04:22.203: INFO: Number of nodes with available pods: 0
May 1 16:04:22.203: INFO: Number of running nodes: 0, number of available pods: 0
May 1 16:04:22.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6907/daemonsets","resourceVersion":"8462112"},"items":null}
May 1 16:04:22.209: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6907/pods","resourceVersion":"8462112"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:04:22.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6907" for this suite.
May 1 16:04:30.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:04:30.482: INFO: namespace daemonsets-6907 deletion completed in 8.185512994s
• [SLOW TEST:45.819 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:04:30.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
May 1 16:04:30.634: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix136336807/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:04:30.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-959" for this suite.
May 1 16:04:36.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:04:36.811: INFO: namespace kubectl-959 deletion completed in 6.105983315s
• [SLOW TEST:6.329 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:04:36.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 1 16:04:36.923: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 1 16:04:36.938: INFO: Waiting for terminating namespaces to be deleted...
May 1 16:04:36.940: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 1 16:04:36.947: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.947: INFO: Container kube-proxy ready: true, restart count 0
May 1 16:04:36.947: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.947: INFO: Container kindnet-cni ready: true, restart count 0
May 1 16:04:36.947: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 1 16:04:36.955: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.955: INFO: Container coredns ready: true, restart count 0
May 1 16:04:36.955: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.955: INFO: Container coredns ready: true, restart count 0
May 1 16:04:36.955: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.955: INFO: Container kube-proxy ready: true, restart count 0
May 1 16:04:36.955: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 1 16:04:36.955: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected
if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160af311f0292832], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:04:37.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8976" for this suite. May 1 16:04:44.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:44.417: INFO: namespace sched-pred-8976 deletion completed in 6.438978451s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.606 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:04:44.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4d363ccc-e956-4772-a07e-7cf87b6ac69a STEP: Creating a pod to test consume secrets May 1 16:04:44.534: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f" in namespace "projected-8871" to be "success or failure" May 1 16:04:44.547: INFO: Pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.320441ms May 1 16:04:46.550: INFO: Pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01610801s May 1 16:04:48.555: INFO: Pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f": Phase="Running", Reason="", readiness=true. Elapsed: 4.020653642s May 1 16:04:50.559: INFO: Pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0245692s STEP: Saw pod success May 1 16:04:50.559: INFO: Pod "pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f" satisfied condition "success or failure" May 1 16:04:50.562: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f container projected-secret-volume-test: STEP: delete the pod May 1 16:04:50.585: INFO: Waiting for pod pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f to disappear May 1 16:04:50.588: INFO: Pod pod-projected-secrets-9035bb3f-232f-473b-a3ee-a9f3d330856f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:04:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8871" for this suite. May 1 16:04:58.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:04:58.816: INFO: namespace projected-8871 deletion completed in 8.225378927s • [SLOW TEST:14.398 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:04:58.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 1 16:04:59.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16" in namespace "downward-api-7732" to be "success or failure" May 1 16:04:59.218: INFO: Pod "downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16": Phase="Pending", Reason="", readiness=false. Elapsed: 96.238358ms May 1 16:05:01.319: INFO: Pod "downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197276317s May 1 16:05:03.323: INFO: Pod "downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201572335s STEP: Saw pod success May 1 16:05:03.323: INFO: Pod "downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16" satisfied condition "success or failure" May 1 16:05:03.326: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16 container client-container: STEP: delete the pod May 1 16:05:03.779: INFO: Waiting for pod downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16 to disappear May 1 16:05:03.782: INFO: Pod downwardapi-volume-3541372e-3a6e-406a-9c30-969f0ad76b16 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:05:03.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7732" for this suite. 
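The downward-API volume test above only logs the pod's lifecycle, not the spec it created. A minimal manifest exercising the same mechanism might look like the following sketch (the name, image, and paths are illustrative assumptions, not the exact objects the e2e framework builds):

```yaml
# Hypothetical pod: exposes the container's own memory request through a
# downward-API volume file, mirroring what this conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

A pod like this runs to completion and its container log contains the memory request, which is the "success or failure" condition the framework polls for in the entries above.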
May 1 16:05:09.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:05:09.962: INFO: namespace downward-api-7732 deletion completed in 6.176010927s

• [SLOW TEST:11.146 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:05:09.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-ce41a486-3294-44a4-a98e-ab84728e21b0
STEP: Creating secret with name s-test-opt-upd-c883af88-1efc-46f5-961e-ee1da6c0646e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ce41a486-3294-44a4-a98e-ab84728e21b0
STEP: Updating secret s-test-opt-upd-c883af88-1efc-46f5-961e-ee1da6c0646e
STEP: Creating secret with name s-test-opt-create-620e0442-4e8b-4682-8d6f-1069ee1b41f4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:05:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5350" for this suite.
May 1 16:05:46.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:05:46.619: INFO: namespace secrets-5350 deletion completed in 24.091106229s

• [SLOW TEST:36.658 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:05:46.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 1 16:05:54.363: INFO: 10 pods remaining
May 1 16:05:54.363: INFO: 0 pods has nil DeletionTimestamp
May 1 16:05:54.363: INFO:
May 1 16:05:54.901: INFO: 0 pods remaining
May 1 16:05:54.902: INFO: 0 pods has nil DeletionTimestamp
May 1 16:05:54.902: INFO:
May 1 16:05:57.148: INFO: 0 pods remaining
May 1 16:05:57.148: INFO: 0 pods has nil DeletionTimestamp
May 1 16:05:57.148: INFO:
May 1 16:05:58.063: INFO: 0 pods remaining
May 1 16:05:58.063: INFO: 0 pods has nil DeletionTimestamp
May 1 16:05:58.063: INFO:
STEP: Gathering metrics
W0501 16:05:59.345905 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 16:05:59.346: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:05:59.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1578" for this suite.
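The "keep the rc around until all its pods are deleted" behavior polled above is what the API calls foreground cascading deletion. A sketch of the deleteOptions involved, as an assumption about what the test sends rather than its literal request body:

```yaml
# Hypothetical DeleteOptions body for the rc delete. A Foreground
# propagationPolicy makes the API server keep the ReplicationController
# (with a deletionTimestamp and the foregroundDeletion finalizer) until
# all of its dependent pods are gone -- the condition the log polls for.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```

With Background or Orphan propagation the owner would disappear immediately, so the "N pods remaining" loop above would not be observable.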
May 1 16:06:08.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:06:08.310: INFO: namespace gc-1578 deletion completed in 8.647115184s

• [SLOW TEST:21.690 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:06:08.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 1 16:06:14.590: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:06:14.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4355" for this suite.
May 1 16:06:22.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:06:22.789: INFO: namespace container-runtime-4355 deletion completed in 8.119370226s

• [SLOW TEST:14.479 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:06:22.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-1ab74813-b5d7-4e8b-9bf3-23be2ee01198
STEP: Creating a pod to test consume secrets
May 1 16:06:22.982: INFO: Waiting up to 5m0s for pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7" in namespace "secrets-6067" to be "success or failure"
May 1 16:06:22.986: INFO: Pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.39113ms
May 1 16:06:25.030: INFO: Pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048120431s
May 1 16:06:27.034: INFO: Pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.051967758s
May 1 16:06:29.047: INFO: Pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065168956s
STEP: Saw pod success
May 1 16:06:29.047: INFO: Pod "pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7" satisfied condition "success or failure"
May 1 16:06:29.049: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7 container secret-volume-test:
STEP: delete the pod
May 1 16:06:29.080: INFO: Waiting for pod pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7 to disappear
May 1 16:06:29.099: INFO: Pod pod-secrets-14bb5b5f-635b-4864-85ca-6484c03029b7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:06:29.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6067" for this suite.
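"Mappings and Item Mode" in the secret-volume test above refers to remapping a secret key onto a different file path with an explicit file mode. A sketch of a manifest exercising the same combination (names, image, key, and mode are illustrative assumptions; the suite itself uses UUID-suffixed names):

```yaml
# Hypothetical pod: mounts one secret key under a remapped path with an
# explicit per-item file mode, the combination this test asserts on.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # illustrative; the suite appends a UUID
      items:
      - key: data-1                 # key in the Secret's data map
        path: new-path-data-1       # the "mapping": file name inside the mount
        mode: 0400                  # the "Item Mode"
```

Omitting `items` would materialize every key under its own name with the volume's `defaultMode` instead.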
May 1 16:06:35.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:06:35.189: INFO: namespace secrets-6067 deletion completed in 6.087466414s

• [SLOW TEST:12.400 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:06:35.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:07:01.249: INFO: Container started at 2020-05-01 16:06:38 +0000 UTC, pod became ready at 2020-05-01 16:07:01 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:07:01.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1964" for this suite.
May 1 16:07:23.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:07:23.451: INFO: namespace container-probe-1964 deletion completed in 22.198709639s

• [SLOW TEST:48.261 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:07:23.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 1 16:07:28.063: INFO: Successfully updated pod "annotationupdate55884c09-6dda-4986-85d5-3274aba3126b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:07:30.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5058" for this suite.
May 1 16:07:54.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:07:54.271: INFO: namespace downward-api-5058 deletion completed in 24.122543997s

• [SLOW TEST:30.820 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:07:54.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
May 1 16:08:00.681: INFO: Pod pod-hostip-e7761188-5052-4657-87f2-3af471051eb2 has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:08:00.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-842" for this suite.
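The host-IP test above reads `status.hostIP` from the pod's API object. The same field can also be handed to the container itself via the downward API, as in this sketch (pod name, image, and variable name are illustrative assumptions):

```yaml
# Hypothetical pod: injects the node IP recorded in status.hostIP -- the
# field the conformance test checks -- as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```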
May 1 16:08:22.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:08:22.802: INFO: namespace pods-842 deletion completed in 22.117184463s

• [SLOW TEST:28.531 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:08:22.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 1 16:08:22.966: INFO: Waiting up to 5m0s for pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117" in namespace "emptydir-9817" to be "success or failure"
May 1 16:08:23.020: INFO: Pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117": Phase="Pending", Reason="", readiness=false. Elapsed: 54.212107ms
May 1 16:08:25.212: INFO: Pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245973782s
May 1 16:08:27.216: INFO: Pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117": Phase="Running", Reason="", readiness=true. Elapsed: 4.249888101s
May 1 16:08:29.220: INFO: Pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253935868s
STEP: Saw pod success
May 1 16:08:29.220: INFO: Pod "pod-db3bedf2-3ec9-474f-8a56-200f89354117" satisfied condition "success or failure"
May 1 16:08:29.222: INFO: Trying to get logs from node iruya-worker pod pod-db3bedf2-3ec9-474f-8a56-200f89354117 container test-container:
STEP: delete the pod
May 1 16:08:29.502: INFO: Waiting for pod pod-db3bedf2-3ec9-474f-8a56-200f89354117 to disappear
May 1 16:08:29.532: INFO: Pod pod-db3bedf2-3ec9-474f-8a56-200f89354117 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:08:29.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9817" for this suite.
May 1 16:08:35.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:08:35.671: INFO: namespace emptydir-9817 deletion completed in 6.136187302s

• [SLOW TEST:12.868 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:08:35.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-996ff129-4dbe-425c-891f-dc280a486f3f
STEP: Creating a pod to test consume secrets
May 1 16:08:36.004: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb" in namespace "projected-1875" to be "success or failure"
May 1 16:08:36.035: INFO: Pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.624326ms
May 1 16:08:38.039: INFO: Pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035739872s
May 1 16:08:40.350: INFO: Pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346093055s
May 1 16:08:42.673: INFO: Pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.669657109s
STEP: Saw pod success
May 1 16:08:42.673: INFO: Pod "pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb" satisfied condition "success or failure"
May 1 16:08:42.676: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb container projected-secret-volume-test:
STEP: delete the pod
May 1 16:08:42.744: INFO: Waiting for pod pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb to disappear
May 1 16:08:42.751: INFO: Pod pod-projected-secrets-399c341b-9302-4299-86d4-c180a47404bb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:08:42.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1875" for this suite.
May 1 16:08:48.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:08:48.839: INFO: namespace projected-1875 deletion completed in 6.085272177s

• [SLOW TEST:13.168 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:08:48.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:08:48.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45" in namespace "projected-7057" to be "success or failure"
May 1 16:08:48.937: INFO: Pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45": Phase="Pending", Reason="", readiness=false. Elapsed: 5.598009ms
May 1 16:08:50.942: INFO: Pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009886062s
May 1 16:08:52.946: INFO: Pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45": Phase="Running", Reason="", readiness=true. Elapsed: 4.013968071s
May 1 16:08:54.949: INFO: Pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017474394s
STEP: Saw pod success
May 1 16:08:54.949: INFO: Pod "downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45" satisfied condition "success or failure"
May 1 16:08:54.951: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45 container client-container:
STEP: delete the pod
May 1 16:08:54.969: INFO: Waiting for pod downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45 to disappear
May 1 16:08:54.973: INFO: Pod downwardapi-volume-12c9a8c2-e342-4e46-9ba8-9d13444b4b45 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:08:54.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7057" for this suite.
May 1 16:09:01.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:09:01.204: INFO: namespace projected-7057 deletion completed in 6.227784119s

• [SLOW TEST:12.364 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:09:01.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be
provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-1a7bf2d6-5170-44c0-b575-1c23a9327856 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:09:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2788" for this suite. May 1 16:09:30.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:09:30.759: INFO: namespace configmap-2788 deletion completed in 22.141426619s • [SLOW TEST:29.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:09:30.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0501 16:09:40.904649 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 16:09:40.904: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:09:40.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2434" for this suite.
May 1 16:09:46.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:09:47.004: INFO: namespace gc-2434 deletion completed in 6.096565314s
• [SLOW TEST:16.245 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:09:47.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0360d2a1-8688-499f-a3a1-0ce3501aff06
STEP: Creating a pod to test consume configMaps
May 1 16:09:47.138: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796" in namespace "projected-5821" to be "success or failure"
May 1 16:09:47.149: INFO: Pod "pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406232ms
May 1 16:09:49.153: INFO: Pod "pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014844769s
May 1 16:09:51.157: INFO: Pod "pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018800316s
STEP: Saw pod success
May 1 16:09:51.157: INFO: Pod "pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796" satisfied condition "success or failure"
May 1 16:09:51.160: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796 container projected-configmap-volume-test:
STEP: delete the pod
May 1 16:09:51.449: INFO: Waiting for pod pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796 to disappear
May 1 16:09:51.650: INFO: Pod pod-projected-configmaps-bb0d198c-31f6-4fa1-be52-8903c5974796 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:09:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5821" for this suite.
May 1 16:09:57.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:09:57.755: INFO: namespace projected-5821 deletion completed in 6.101313485s
• [SLOW TEST:10.751 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:09:57.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 1 16:10:10.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:10.264: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:12.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:12.357: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:14.265: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:14.269: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:16.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:16.268: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:18.265: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:18.285: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:20.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:20.269: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:22.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:22.298: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:24.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:24.268: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:26.264: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:26.395: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:28.265: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:28.269: INFO: Pod pod-with-poststart-exec-hook still exists
May 1 16:10:30.265: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 1 16:10:30.270: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:10:30.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3651" for this suite.
May 1 16:10:54.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:10:54.372: INFO: namespace container-lifecycle-hook-3651 deletion completed in 24.098902392s
• [SLOW TEST:56.616 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:10:54.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 1 16:10:59.469: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:11:00.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8028" for this suite.
May 1 16:11:24.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:11:24.591: INFO: namespace replicaset-8028 deletion completed in 24.087720353s
• [SLOW TEST:30.219 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:11:24.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:11:24.919: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9b7f7008-3636-4455-b387-e6fb29407ea1", Controller:(*bool)(0xc0021337f2), BlockOwnerDeletion:(*bool)(0xc0021337f3)}}
May 1 16:11:24.980: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ec540318-1fc5-458d-9659-b4314efe302b", Controller:(*bool)(0xc002b18ea2), BlockOwnerDeletion:(*bool)(0xc002b18ea3)}}
May 1 16:11:24.984: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"92fe5184-16ab-4094-b8c0-7a19df750828", Controller:(*bool)(0xc00213397a), BlockOwnerDeletion:(*bool)(0xc00213397b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:11:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7908" for this suite.
May 1 16:11:36.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:11:36.329: INFO: namespace gc-7908 deletion completed in 6.27789223s
• [SLOW TEST:11.738 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:11:36.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0501 16:11:48.364964 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 16:11:48.365: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:11:48.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8630" for this suite.
May 1 16:12:02.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:12:02.458: INFO: namespace gc-8630 deletion completed in 14.090580161s
• [SLOW TEST:26.127 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:12:02.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:12:09.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9233" for this suite.
May 1 16:12:49.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:12:49.977: INFO: namespace kubelet-test-9233 deletion completed in 40.55206581s
• [SLOW TEST:47.519 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:12:49.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:12:57.013: INFO: Waiting up to 5m0s for pod "client-envvars-9da26fca-8949-4472-becc-bfea69c5016a" in namespace "pods-4248" to be "success or failure"
May 1 16:12:57.036: INFO: Pod "client-envvars-9da26fca-8949-4472-becc-bfea69c5016a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.441713ms
May 1 16:12:59.039: INFO: Pod "client-envvars-9da26fca-8949-4472-becc-bfea69c5016a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025400552s
May 1 16:13:01.043: INFO: Pod "client-envvars-9da26fca-8949-4472-becc-bfea69c5016a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030006624s
STEP: Saw pod success
May 1 16:13:01.043: INFO: Pod "client-envvars-9da26fca-8949-4472-becc-bfea69c5016a" satisfied condition "success or failure"
May 1 16:13:01.047: INFO: Trying to get logs from node iruya-worker pod client-envvars-9da26fca-8949-4472-becc-bfea69c5016a container env3cont:
STEP: delete the pod
May 1 16:13:01.070: INFO: Waiting for pod client-envvars-9da26fca-8949-4472-becc-bfea69c5016a to disappear
May 1 16:13:01.095: INFO: Pod client-envvars-9da26fca-8949-4472-becc-bfea69c5016a no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:13:01.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4248" for this suite.
May 1 16:13:43.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:13:43.445: INFO: namespace pods-4248 deletion completed in 42.346427921s
• [SLOW TEST:53.467 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:13:43.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 1 16:13:44.677: INFO: Waiting up to 5m0s for pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062" in namespace "emptydir-7841" to be "success or failure"
May 1 16:13:44.699: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062": Phase="Pending", Reason="", readiness=false. Elapsed: 21.753894ms
May 1 16:13:46.821: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144174383s
May 1 16:13:48.839: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161644744s
May 1 16:13:50.879: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201547473s
May 1 16:13:52.977: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.299810169s
STEP: Saw pod success
May 1 16:13:52.977: INFO: Pod "pod-352ad6c3-905d-4e57-9935-b3de5f515062" satisfied condition "success or failure"
May 1 16:13:52.980: INFO: Trying to get logs from node iruya-worker2 pod pod-352ad6c3-905d-4e57-9935-b3de5f515062 container test-container:
STEP: delete the pod
May 1 16:13:53.209: INFO: Waiting for pod pod-352ad6c3-905d-4e57-9935-b3de5f515062 to disappear
May 1 16:13:53.226: INFO: Pod pod-352ad6c3-905d-4e57-9935-b3de5f515062 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:13:53.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7841" for this suite.
May 1 16:13:59.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:13:59.512: INFO: namespace emptydir-7841 deletion completed in 6.283657739s
• [SLOW TEST:16.066 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:13:59.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 1 16:13:59.928: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:14:10.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6707" for this suite.
May 1 16:14:18.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:14:18.674: INFO: namespace init-container-6707 deletion completed in 8.165943276s
• [SLOW TEST:19.161 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:14:18.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-02899826-fede-490e-9e64-8400f01d0d4b in namespace container-probe-2985
May 1 16:14:24.900: INFO: Started pod busybox-02899826-fede-490e-9e64-8400f01d0d4b in namespace container-probe-2985
STEP: checking the pod's current state and verifying that restartCount is present
May 1 16:14:24.902: INFO: Initial restart count of pod busybox-02899826-fede-490e-9e64-8400f01d0d4b is 0
May 1 16:15:21.395: INFO: Restart count of pod container-probe-2985/busybox-02899826-fede-490e-9e64-8400f01d0d4b is now 1 (56.493151818s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:15:22.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2985" for this suite.
May 1 16:15:30.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:15:30.648: INFO: namespace container-probe-2985 deletion completed in 8.328773627s
• [SLOW TEST:71.974 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:15:30.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
May 1 16:15:37.724: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8232 pod-service-account-b608741c-efb7-4b3b-8ccd-126fe031c3cc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 1 16:15:51.611: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8232 pod-service-account-b608741c-efb7-4b3b-8ccd-126fe031c3cc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 1 16:15:51.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8232 pod-service-account-b608741c-efb7-4b3b-8ccd-126fe031c3cc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:15:51.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8232" for this suite.
May 1 16:15:58.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:15:58.688: INFO: namespace svcaccounts-8232 deletion completed in 6.684530968s
• [SLOW TEST:28.039 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:15:58.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api
object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 1 16:15:59.266: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464566,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 16:15:59.266: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464566,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 1 16:16:09.298: INFO: Got :
MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464586,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 1 16:16:09.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464586,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 1 16:16:19.304: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464606,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:16:19.304: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464606,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 1 16:16:29.312: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464625,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 16:16:29.312: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-a,UID:8a3f5f18-506d-4540-aa20-157180bf09d0,ResourceVersion:8464625,Generation:0,CreationTimestamp:2020-05-01 16:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 1 16:16:39.321: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-b,UID:54f6e8be-565f-4a25-9089-ec4861019cb3,ResourceVersion:8464645,Generation:0,CreationTimestamp:2020-05-01 16:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:16:39.321: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-b,UID:54f6e8be-565f-4a25-9089-ec4861019cb3,ResourceVersion:8464645,Generation:0,CreationTimestamp:2020-05-01 16:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 1 16:16:49.384: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-b,UID:54f6e8be-565f-4a25-9089-ec4861019cb3,ResourceVersion:8464666,Generation:0,CreationTimestamp:2020-05-01 16:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 16:16:49.384: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3169,SelfLink:/api/v1/namespaces/watch-3169/configmaps/e2e-watch-test-configmap-b,UID:54f6e8be-565f-4a25-9089-ec4861019cb3,ResourceVersion:8464666,Generation:0,CreationTimestamp:2020-05-01 16:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:16:59.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3169" for this suite. 
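The Watchers test above registers three watchers (label A, label B, and A-or-B) and then checks that each ConfigMap event is delivered only to the watchers whose label selector matches, which is why every ADDED/MODIFIED/DELETED entry appears twice in the log (once for the exact-label watcher, once for the A-or-B watcher). A minimal, hypothetical Python sketch of that selection logic (not client-go and not the e2e framework's code; names are illustrative):

```python
# Equality-based selector: every key/value pair must appear in the labels,
# mirroring a selector like watch-this-configmap=multiple-watchers-A.
def matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

# Set-based selector: the label's value must be one of the allowed values,
# mirroring "watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)".
def matches_any(key: str, allowed: set, labels: dict) -> bool:
    return labels.get(key) in allowed

watch_a = {"watch-this-configmap": "multiple-watchers-A"}
watch_b = {"watch-this-configmap": "multiple-watchers-B"}
cm_a_labels = {"watch-this-configmap": "multiple-watchers-A"}

# ConfigMap A is seen by watcher A and the A-or-B watcher, but not watcher B,
# so its events show up exactly twice in the log.
print(matches(watch_a, cm_a_labels))                                         # True
print(matches(watch_b, cm_a_labels))                                         # False
print(matches_any("watch-this-configmap",
                  {"multiple-watchers-A", "multiple-watchers-B"},
                  cm_a_labels))                                              # True
```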
May 1 16:17:07.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:17:07.787: INFO: namespace watch-3169 deletion completed in 8.39324128s
• [SLOW TEST:69.098 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:17:07.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:17:08.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:17:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8466" for this suite.
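The DNS conformance test that follows probes service records such as dns-test-service.dns-4059.svc.cluster.local and a PTR record for the service IP 10.96.181.67. Two of the names its probe script constructs with shell/awk can be sketched in Python; this is a hypothetical illustration of the naming scheme (values taken from the log; dig appends a trailing dot to the PTR name, which is omitted here):

```python
def ptr_name(ip: str) -> str:
    """Reverse-DNS name queried for a PTR record, e.g. 67.181.96.10.in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Dashed pod A record, as built by the probe's awk one-liner."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(ptr_name("10.96.181.67"))           # 67.181.96.10.in-addr.arpa
print(pod_a_record("10.244.1.5", "dns-4059"))  # 10-244-1-5.dns-4059.pod.cluster.local
```

The pod IP 10.244.1.5 is an arbitrary example; the real probe derives it from `hostname -i` inside the pod.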
May 1 16:17:54.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:17:54.415: INFO: namespace pods-8466 deletion completed in 40.220051136s
• [SLOW TEST:46.628 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:17:54.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK >
/results/wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4059.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.181.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.181.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.181.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.181.67_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4059.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.181.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.181.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.181.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.181.67_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 1 16:18:05.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.070: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.089: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.091: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497)
May 1 16:18:05.094: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod
dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:05.097: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:05.114: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:10.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.126: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.129: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod 
dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.158: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.160: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.163: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.166: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:10.184: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:15.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod 
dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.122: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.124: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.127: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.144: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.147: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.150: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.153: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the 
requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:15.168: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:20.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.121: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.124: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.127: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.147: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods 
dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.150: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.153: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.155: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:20.169: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:25.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.122: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) 
May 1 16:18:25.126: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.129: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.148: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.153: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.156: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:25.173: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local 
jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:30.179: INFO: Unable to read wheezy_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.190: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.212: INFO: Unable to read jessie_udp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod 
dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local from pod dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497: the server could not find the requested resource (get pods dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497) May 1 16:18:30.241: INFO: Lookups using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 failed for: [wheezy_udp@dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@dns-test-service.dns-4059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_udp@dns-test-service.dns-4059.svc.cluster.local jessie_tcp@dns-test-service.dns-4059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4059.svc.cluster.local] May 1 16:18:35.174: INFO: DNS probes using dns-4059/dns-test-0bc5b03a-1312-4053-90ba-85cc68c82497 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:18:35.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4059" for this suite. 
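The lookups retried above all target the cluster-DNS names generated for a Service. A minimal sketch of how those names are formed (the service name `dns-test-service`, namespace `dns-4059`, and the `_http._tcp` SRV prefix are taken from the log; the variables are illustrative):

```shell
# Build the DNS names probed by the test from a service name and namespace.
svc="dns-test-service"
ns="dns-4059"
a_record="${svc}.${ns}.svc.cluster.local"               # A/AAAA lookup target
srv_record="_http._tcp.${svc}.${ns}.svc.cluster.local"  # SRV lookup target
echo "$a_record"
echo "$srv_record"
```

The wheezy/jessie prefixes in the log distinguish the two prober images; both query the same names over UDP and TCP.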
May 1 16:18:43.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:18:43.830: INFO: namespace dns-4059 deletion completed in 8.120400193s • [SLOW TEST:49.415 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:18:43.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8291.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8291.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8291.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8291.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:18:50.619: INFO: DNS probes using dns-8291/dns-test-df464462-dc24-461b-90c0-2e7137eb22bb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:18:50.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8291" for this suite. 
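The probe script above derives the pod's A-record name from its IP (`podARec=$(hostname -i | awk ...)`; the `$$` in the log is template escaping for a literal `$`). A stand-alone sketch of that derivation, using a fixed example IP since there is no pod environment here (the IP value is an assumption, not from this test):

```shell
# Reproduce the podARec construction from the probe script, with a
# hypothetical pod IP in place of `hostname -i`.
ip="10.244.1.183"  # example pod IP (assumption)
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8291.pod.cluster.local"}')
echo "$podARec"    # dots in the IP become dashes in the record name
```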
May 1 16:18:57.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:18:57.814: INFO: namespace dns-8291 deletion completed in 7.054276471s • [SLOW TEST:13.983 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:18:57.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 16:18:57.892: INFO: Waiting up to 5m0s for pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb" in namespace "emptydir-3011" to be "success or failure" May 1 16:18:57.963: INFO: Pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 70.571799ms May 1 16:18:59.967: INFO: Pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074672387s May 1 16:19:01.972: INFO: Pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.079072595s May 1 16:19:03.976: INFO: Pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083262562s STEP: Saw pod success May 1 16:19:03.976: INFO: Pod "pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb" satisfied condition "success or failure" May 1 16:19:03.978: INFO: Trying to get logs from node iruya-worker pod pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb container test-container: STEP: delete the pod May 1 16:19:04.049: INFO: Waiting for pod pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb to disappear May 1 16:19:04.056: INFO: Pod pod-ee164fb6-84f9-4a86-93e8-6e5b12bbb3bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:19:04.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3011" for this suite. May 1 16:19:10.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:19:10.152: INFO: namespace emptydir-3011 deletion completed in 6.09173394s • [SLOW TEST:12.337 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:19:10.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 1 16:19:10.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9270' May 1 16:19:10.536: INFO: stderr: "" May 1 16:19:10.536: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 1 16:19:11.541: INFO: Selector matched 1 pods for map[app:redis] May 1 16:19:11.541: INFO: Found 0 / 1 May 1 16:19:12.540: INFO: Selector matched 1 pods for map[app:redis] May 1 16:19:12.540: INFO: Found 0 / 1 May 1 16:19:13.540: INFO: Selector matched 1 pods for map[app:redis] May 1 16:19:13.540: INFO: Found 0 / 1 May 1 16:19:14.540: INFO: Selector matched 1 pods for map[app:redis] May 1 16:19:14.540: INFO: Found 1 / 1 May 1 16:19:14.540: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 16:19:14.544: INFO: Selector matched 1 pods for map[app:redis] May 1 16:19:14.544: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 1 16:19:14.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270' May 1 16:19:14.653: INFO: stderr: "" May 1 16:19:14.653: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 16:19:13.467 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 16:19:13.467 # Server started, Redis version 3.2.12\n1:M 01 May 16:19:13.468 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 16:19:13.468 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 1 16:19:14.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270 --tail=1' May 1 16:19:14.767: INFO: stderr: "" May 1 16:19:14.767: INFO: stdout: "1:M 01 May 16:19:13.468 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 1 16:19:14.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270 --limit-bytes=1' May 1 16:19:14.864: INFO: stderr: "" May 1 16:19:14.864: INFO: stdout: " " STEP: exposing timestamps May 1 16:19:14.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270 --tail=1 --timestamps' May 1 16:19:14.951: INFO: stderr: "" May 1 16:19:14.951: INFO: 
stdout: "2020-05-01T16:19:13.468093501Z 1:M 01 May 16:19:13.468 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 1 16:19:17.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270 --since=1s' May 1 16:19:17.552: INFO: stderr: "" May 1 16:19:17.552: INFO: stdout: "" May 1 16:19:17.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-clb95 redis-master --namespace=kubectl-9270 --since=24h' May 1 16:19:17.651: INFO: stderr: "" May 1 16:19:17.651: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 16:19:13.467 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 16:19:13.467 # Server started, Redis version 3.2.12\n1:M 01 May 16:19:13.468 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 May 16:19:13.468 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 1 16:19:17.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9270' May 1 16:19:17.758: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:19:17.758: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 1 16:19:17.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9270' May 1 16:19:17.860: INFO: stderr: "No resources found.\n" May 1 16:19:17.860: INFO: stdout: "" May 1 16:19:17.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9270 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 16:19:17.997: INFO: stderr: "" May 1 16:19:17.997: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:19:17.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9270" for this suite. 
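The kubectl test above filters the same pod log four ways: `--tail=1`, `--limit-bytes=1`, `--timestamps`, and `--since=1s`/`--since=24h`. Since `kubectl logs` needs a live cluster, this sketch applies the equivalent cuts to a local file standing in for the pod's log stream (file path and contents are illustrative):

```shell
# Stand-in for the pod's log stream.
printf 'Server started\nReady to accept connections\n' > /tmp/pod.log
tail -n 1 /tmp/pod.log   # analogous to `kubectl logs ... --tail=1`
head -c 1 /tmp/pod.log   # analogous to `kubectl logs ... --limit-bytes=1`
```

`--limit-bytes=1` explains the single-character stdout (`" "`-like output) seen in the log: the cut is byte-exact, not line-aligned.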
May 1 16:19:40.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:19:40.099: INFO: namespace kubectl-9270 deletion completed in 22.089264395s • [SLOW TEST:29.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:19:40.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7523 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 16:19:40.258: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 16:20:08.425: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.55:8080/dial?request=hostName&protocol=udp&host=10.244.1.183&port=8081&tries=1'] Namespace:pod-network-test-7523 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:20:08.425: INFO: >>> kubeConfig: /root/.kube/config I0501 16:20:08.458503 6 log.go:172] (0xc0025246e0) (0xc002c34fa0) Create stream I0501 16:20:08.458534 6 log.go:172] (0xc0025246e0) (0xc002c34fa0) Stream added, broadcasting: 1 I0501 16:20:08.460533 6 log.go:172] (0xc0025246e0) Reply frame received for 1 I0501 16:20:08.460581 6 log.go:172] (0xc0025246e0) (0xc002c35040) Create stream I0501 16:20:08.460591 6 log.go:172] (0xc0025246e0) (0xc002c35040) Stream added, broadcasting: 3 I0501 16:20:08.461595 6 log.go:172] (0xc0025246e0) Reply frame received for 3 I0501 16:20:08.461637 6 log.go:172] (0xc0025246e0) (0xc0020b6960) Create stream I0501 16:20:08.461653 6 log.go:172] (0xc0025246e0) (0xc0020b6960) Stream added, broadcasting: 5 I0501 16:20:08.462543 6 log.go:172] (0xc0025246e0) Reply frame received for 5 I0501 16:20:08.530372 6 log.go:172] (0xc0025246e0) Data frame received for 3 I0501 16:20:08.530414 6 log.go:172] (0xc002c35040) (3) Data frame handling I0501 16:20:08.530441 6 log.go:172] (0xc002c35040) (3) Data frame sent I0501 16:20:08.530723 6 log.go:172] (0xc0025246e0) Data frame received for 5 I0501 16:20:08.530749 6 log.go:172] (0xc0020b6960) (5) Data frame handling I0501 16:20:08.531002 6 log.go:172] (0xc0025246e0) Data frame received for 3 I0501 16:20:08.531033 6 log.go:172] (0xc002c35040) (3) Data frame handling I0501 16:20:08.533055 6 log.go:172] (0xc0025246e0) Data frame received for 1 I0501 16:20:08.533071 6 log.go:172] (0xc002c34fa0) (1) Data frame handling I0501 16:20:08.533081 6 log.go:172] (0xc002c34fa0) (1) Data frame sent I0501 16:20:08.533089 6 log.go:172] (0xc0025246e0) (0xc002c34fa0) Stream removed, broadcasting: 1 I0501 16:20:08.533104 6 log.go:172] (0xc0025246e0) Go away received I0501 16:20:08.533450 6 log.go:172] (0xc0025246e0) (0xc002c34fa0) Stream removed, broadcasting: 1 I0501 16:20:08.533477 6 log.go:172] 
(0xc0025246e0) (0xc002c35040) Stream removed, broadcasting: 3 I0501 16:20:08.533492 6 log.go:172] (0xc0025246e0) (0xc0020b6960) Stream removed, broadcasting: 5 May 1 16:20:08.533: INFO: Waiting for endpoints: map[] May 1 16:20:08.537: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.55:8080/dial?request=hostName&protocol=udp&host=10.244.2.54&port=8081&tries=1'] Namespace:pod-network-test-7523 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:20:08.537: INFO: >>> kubeConfig: /root/.kube/config I0501 16:20:08.562836 6 log.go:172] (0xc002525600) (0xc002c35220) Create stream I0501 16:20:08.562861 6 log.go:172] (0xc002525600) (0xc002c35220) Stream added, broadcasting: 1 I0501 16:20:08.565002 6 log.go:172] (0xc002525600) Reply frame received for 1 I0501 16:20:08.565034 6 log.go:172] (0xc002525600) (0xc002c352c0) Create stream I0501 16:20:08.565046 6 log.go:172] (0xc002525600) (0xc002c352c0) Stream added, broadcasting: 3 I0501 16:20:08.566032 6 log.go:172] (0xc002525600) Reply frame received for 3 I0501 16:20:08.566074 6 log.go:172] (0xc002525600) (0xc001a479a0) Create stream I0501 16:20:08.566082 6 log.go:172] (0xc002525600) (0xc001a479a0) Stream added, broadcasting: 5 I0501 16:20:08.567006 6 log.go:172] (0xc002525600) Reply frame received for 5 I0501 16:20:08.633366 6 log.go:172] (0xc002525600) Data frame received for 3 I0501 16:20:08.633397 6 log.go:172] (0xc002c352c0) (3) Data frame handling I0501 16:20:08.633415 6 log.go:172] (0xc002c352c0) (3) Data frame sent I0501 16:20:08.634021 6 log.go:172] (0xc002525600) Data frame received for 5 I0501 16:20:08.634051 6 log.go:172] (0xc001a479a0) (5) Data frame handling I0501 16:20:08.634347 6 log.go:172] (0xc002525600) Data frame received for 3 I0501 16:20:08.634383 6 log.go:172] (0xc002c352c0) (3) Data frame handling I0501 16:20:08.635602 6 log.go:172] (0xc002525600) Data frame received for 1 I0501 16:20:08.635645 6 
log.go:172] (0xc002c35220) (1) Data frame handling I0501 16:20:08.635674 6 log.go:172] (0xc002c35220) (1) Data frame sent I0501 16:20:08.635696 6 log.go:172] (0xc002525600) (0xc002c35220) Stream removed, broadcasting: 1 I0501 16:20:08.635723 6 log.go:172] (0xc002525600) Go away received I0501 16:20:08.635864 6 log.go:172] (0xc002525600) (0xc002c35220) Stream removed, broadcasting: 1 I0501 16:20:08.635890 6 log.go:172] (0xc002525600) (0xc002c352c0) Stream removed, broadcasting: 3 I0501 16:20:08.635905 6 log.go:172] (0xc002525600) (0xc001a479a0) Stream removed, broadcasting: 5 May 1 16:20:08.635: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:20:08.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7523" for this suite. May 1 16:20:32.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:20:32.727: INFO: namespace pod-network-test-7523 deletion completed in 24.087228296s • [SLOW TEST:52.628 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:20:32.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:21:30.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8449" for this suite. 
May 1 16:21:39.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:21:40.141: INFO: namespace container-runtime-8449 deletion completed in 9.280351491s • [SLOW TEST:67.414 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:21:40.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 
16:21:41.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6319' May 1 16:21:41.118: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 16:21:41.118: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 1 16:21:41.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6319' May 1 16:21:41.922: INFO: stderr: "" May 1 16:21:41.922: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:21:41.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6319" for this suite. 
May 1 16:21:48.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:21:48.272: INFO: namespace kubectl-6319 deletion completed in 6.290395458s

• [SLOW TEST:8.129 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:21:48.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 1 16:21:48.342: INFO: Waiting up to 5m0s for pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf" in namespace "emptydir-3677" to be "success or failure"
May 1 16:21:48.493: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf": Phase="Pending", Reason="", readiness=false. Elapsed: 151.301837ms
May 1 16:21:50.570: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228506528s
May 1 16:21:52.575: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233132108s
May 1 16:21:54.579: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236917583s
May 1 16:21:56.660: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.318057155s
STEP: Saw pod success
May 1 16:21:56.660: INFO: Pod "pod-bfaec052-b707-49fa-b0b8-b9315cc73caf" satisfied condition "success or failure"
May 1 16:21:56.663: INFO: Trying to get logs from node iruya-worker pod pod-bfaec052-b707-49fa-b0b8-b9315cc73caf container test-container:
STEP: delete the pod
May 1 16:21:57.170: INFO: Waiting for pod pod-bfaec052-b707-49fa-b0b8-b9315cc73caf to disappear
May 1 16:21:57.319: INFO: Pod pod-bfaec052-b707-49fa-b0b8-b9315cc73caf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:21:57.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3677" for this suite.
May 1 16:22:05.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:22:05.477: INFO: namespace emptydir-3677 deletion completed in 8.154517809s

• [SLOW TEST:17.205 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:22:05.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:22:05.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f" in namespace "downward-api-7050" to be "success or failure"
May 1 16:22:05.541: INFO: Pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.957946ms
May 1 16:22:07.545: INFO: Pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018430665s
May 1 16:22:09.549: INFO: Pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f": Phase="Running", Reason="", readiness=true. Elapsed: 4.022793926s
May 1 16:22:11.703: INFO: Pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176085984s
STEP: Saw pod success
May 1 16:22:11.703: INFO: Pod "downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f" satisfied condition "success or failure"
May 1 16:22:11.706: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f container client-container:
STEP: delete the pod
May 1 16:22:12.011: INFO: Waiting for pod downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f to disappear
May 1 16:22:12.044: INFO: Pod downwardapi-volume-435d8849-c737-4e7e-af83-28294f53691f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:22:12.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7050" for this suite.
May 1 16:22:18.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:22:18.390: INFO: namespace downward-api-7050 deletion completed in 6.341375845s

• [SLOW TEST:12.912 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:22:18.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jt8f
STEP: Creating a pod to test atomic-volume-subpath
May 1 16:22:18.488: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jt8f" in namespace "subpath-536" to be "success or failure"
May 1 16:22:18.494: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.592754ms
May 1 16:22:20.498: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010008349s
May 1 16:22:22.503: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014841107s
May 1 16:22:24.508: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 6.019686025s
May 1 16:22:26.512: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 8.024066001s
May 1 16:22:28.516: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 10.027787057s
May 1 16:22:30.520: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 12.031679901s
May 1 16:22:32.523: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 14.035411773s
May 1 16:22:34.527: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 16.039068176s
May 1 16:22:36.531: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 18.043251912s
May 1 16:22:38.535: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 20.047231417s
May 1 16:22:40.870: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 22.381769793s
May 1 16:22:42.875: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Running", Reason="", readiness=true. Elapsed: 24.386794425s
May 1 16:22:44.879: INFO: Pod "pod-subpath-test-configmap-jt8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.390841811s
STEP: Saw pod success
May 1 16:22:44.879: INFO: Pod "pod-subpath-test-configmap-jt8f" satisfied condition "success or failure"
May 1 16:22:44.882: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-jt8f container test-container-subpath-configmap-jt8f:
STEP: delete the pod
May 1 16:22:44.985: INFO: Waiting for pod pod-subpath-test-configmap-jt8f to disappear
May 1 16:22:45.030: INFO: Pod pod-subpath-test-configmap-jt8f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jt8f
May 1 16:22:45.030: INFO: Deleting pod "pod-subpath-test-configmap-jt8f" in namespace "subpath-536"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:22:45.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-536" for this suite.
May 1 16:22:51.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:22:51.140: INFO: namespace subpath-536 deletion completed in 6.104803715s

• [SLOW TEST:32.750 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:22:51.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:22:51.548: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec" in namespace "projected-4628" to be "success or failure"
May 1 16:22:51.587: INFO: Pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec": Phase="Pending", Reason="", readiness=false. Elapsed: 39.578283ms
May 1 16:22:53.937: INFO: Pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389734605s
May 1 16:22:55.997: INFO: Pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448755028s
May 1 16:22:58.314: INFO: Pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.766378406s
STEP: Saw pod success
May 1 16:22:58.314: INFO: Pod "downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec" satisfied condition "success or failure"
May 1 16:22:58.511: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec container client-container:
STEP: delete the pod
May 1 16:22:58.723: INFO: Waiting for pod downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec to disappear
May 1 16:22:58.758: INFO: Pod downwardapi-volume-1ba2cbaf-96af-494e-a914-530c913dacec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:22:58.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4628" for this suite.
May 1 16:23:04.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:23:04.970: INFO: namespace projected-4628 deletion completed in 6.208997045s

• [SLOW TEST:13.830 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:23:04.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 1 16:23:05.061: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:23:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5326" for this suite.
May 1 16:23:40.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:23:40.559: INFO: namespace init-container-5326 deletion completed in 24.31489613s

• [SLOW TEST:35.589 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:23:40.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 1 16:23:40.673: INFO: Waiting up to 5m0s for pod "pod-9c921a09-6ac7-45e3-b900-8e9604afb73f" in namespace "emptydir-34" to be "success or failure"
May 1 16:23:40.677: INFO: Pod "pod-9c921a09-6ac7-45e3-b900-8e9604afb73f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.634326ms
May 1 16:23:42.722: INFO: Pod "pod-9c921a09-6ac7-45e3-b900-8e9604afb73f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048086888s
May 1 16:23:44.726: INFO: Pod "pod-9c921a09-6ac7-45e3-b900-8e9604afb73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052142327s
STEP: Saw pod success
May 1 16:23:44.726: INFO: Pod "pod-9c921a09-6ac7-45e3-b900-8e9604afb73f" satisfied condition "success or failure"
May 1 16:23:44.729: INFO: Trying to get logs from node iruya-worker pod pod-9c921a09-6ac7-45e3-b900-8e9604afb73f container test-container:
STEP: delete the pod
May 1 16:23:44.750: INFO: Waiting for pod pod-9c921a09-6ac7-45e3-b900-8e9604afb73f to disappear
May 1 16:23:44.767: INFO: Pod pod-9c921a09-6ac7-45e3-b900-8e9604afb73f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:23:44.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-34" for this suite.
May 1 16:23:50.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:23:50.916: INFO: namespace emptydir-34 deletion completed in 6.14517412s

• [SLOW TEST:10.357 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:23:50.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:23:55.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8791" for this suite.
May 1 16:24:35.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:24:35.704: INFO: namespace kubelet-test-8791 deletion completed in 40.652865894s

• [SLOW TEST:44.787 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:24:35.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-5jzm
STEP: Creating a pod to test atomic-volume-subpath
May 1 16:24:36.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5jzm" in namespace "subpath-6172" to be "success or failure"
May 1 16:24:36.843: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Pending", Reason="", readiness=false. Elapsed: 371.967974ms
May 1 16:24:38.846: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375622869s
May 1 16:24:41.195: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723971437s
May 1 16:24:43.243: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772313596s
May 1 16:24:45.248: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 8.776878184s
May 1 16:24:47.251: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 10.780180417s
May 1 16:24:49.273: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 12.8022911s
May 1 16:24:51.345: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 14.874130163s
May 1 16:24:53.349: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 16.877958502s
May 1 16:24:55.399: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 18.928252487s
May 1 16:24:57.437: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 20.966019851s
May 1 16:24:59.513: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 23.042177285s
May 1 16:25:01.517: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 25.045731166s
May 1 16:25:03.520: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Running", Reason="", readiness=true. Elapsed: 27.048988449s
May 1 16:25:05.557: INFO: Pod "pod-subpath-test-downwardapi-5jzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.086464765s
STEP: Saw pod success
May 1 16:25:05.557: INFO: Pod "pod-subpath-test-downwardapi-5jzm" satisfied condition "success or failure"
May 1 16:25:05.599: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-5jzm container test-container-subpath-downwardapi-5jzm:
STEP: delete the pod
May 1 16:25:05.719: INFO: Waiting for pod pod-subpath-test-downwardapi-5jzm to disappear
May 1 16:25:05.730: INFO: Pod pod-subpath-test-downwardapi-5jzm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5jzm
May 1 16:25:05.730: INFO: Deleting pod "pod-subpath-test-downwardapi-5jzm" in namespace "subpath-6172"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:25:05.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6172" for this suite.
May 1 16:25:11.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:25:11.993: INFO: namespace subpath-6172 deletion completed in 6.257771469s

• [SLOW TEST:36.289 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:25:11.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
May 1 16:25:13.786: INFO: created pod pod-service-account-defaultsa
May 1 16:25:13.786: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 1 16:25:13.884: INFO: created pod pod-service-account-mountsa
May 1 16:25:13.884: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 1 16:25:13.923: INFO: created pod pod-service-account-nomountsa
May 1 16:25:13.923: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 1 16:25:13.959: INFO: created pod pod-service-account-defaultsa-mountspec
May 1 16:25:13.959: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 1 16:25:14.232: INFO: created pod pod-service-account-mountsa-mountspec
May 1 16:25:14.232: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 1 16:25:14.514: INFO: created pod pod-service-account-nomountsa-mountspec
May 1 16:25:14.514: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 1 16:25:14.520: INFO: created pod pod-service-account-defaultsa-nomountspec
May 1 16:25:14.520: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 1 16:25:14.735: INFO: created pod pod-service-account-mountsa-nomountspec
May 1 16:25:14.735: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 1 16:25:14.750: INFO: created pod pod-service-account-nomountsa-nomountspec
May 1 16:25:14.750: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:25:14.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4726" for this suite.
May 1 16:25:50.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:25:50.994: INFO: namespace svcaccounts-4726 deletion completed in 36.196070291s

• [SLOW TEST:39.001 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:25:50.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:25:51.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6462" for this suite.
May 1 16:25:57.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:25:57.219: INFO: namespace services-6462 deletion completed in 6.148598447s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.224 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:25:57.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ce5456e3-e7f2-4760-8e63-ee76b9089095
STEP: Creating a pod to test consume configMaps
May 1 16:25:57.300: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e" in namespace "projected-2903" to be "success or failure"
May 1 16:25:57.321: INFO: Pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.878202ms
May 1 16:25:59.325: INFO: Pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025153297s
May 1 16:26:01.329: INFO: Pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029301883s
May 1 16:26:03.333: INFO: Pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032896209s
STEP: Saw pod success
May 1 16:26:03.333: INFO: Pod "pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e" satisfied condition "success or failure"
May 1 16:26:03.335: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e container projected-configmap-volume-test:
STEP: delete the pod
May 1 16:26:03.394: INFO: Waiting for pod pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e to disappear
May 1 16:26:03.403: INFO: Pod pod-projected-configmaps-a7c9e3b5-4f90-4262-967f-c73405671e6e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:26:03.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2903" for this suite.
May 1 16:26:09.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:26:09.476: INFO: namespace projected-2903 deletion completed in 6.068842613s • [SLOW TEST:12.256 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:26:09.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 16:26:09.599: INFO: Waiting up to 5m0s for pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26" in namespace "emptydir-485" to be "success or failure" May 1 16:26:09.600: INFO: Pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 1.728123ms May 1 16:26:11.604: INFO: Pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005205845s May 1 16:26:13.608: INFO: Pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.009543376s May 1 16:26:15.711: INFO: Pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112282826s STEP: Saw pod success May 1 16:26:15.711: INFO: Pod "pod-32a84477-7f7e-47a4-a76e-055254b5fc26" satisfied condition "success or failure" May 1 16:26:15.713: INFO: Trying to get logs from node iruya-worker2 pod pod-32a84477-7f7e-47a4-a76e-055254b5fc26 container test-container: STEP: delete the pod May 1 16:26:16.054: INFO: Waiting for pod pod-32a84477-7f7e-47a4-a76e-055254b5fc26 to disappear May 1 16:26:16.151: INFO: Pod pod-32a84477-7f7e-47a4-a76e-055254b5fc26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:26:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-485" for this suite. May 1 16:26:22.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:26:22.967: INFO: namespace emptydir-485 deletion completed in 6.733953581s • [SLOW TEST:13.491 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:26:22.967: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cef59b69-f9eb-43aa-9783-cfee210f3096 STEP: Creating a pod to test consume secrets May 1 16:26:23.230: INFO: Waiting up to 5m0s for pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844" in namespace "secrets-8131" to be "success or failure" May 1 16:26:23.359: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844": Phase="Pending", Reason="", readiness=false. Elapsed: 128.432817ms May 1 16:26:25.363: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132988715s May 1 16:26:27.372: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141046406s May 1 16:26:29.376: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844": Phase="Running", Reason="", readiness=true. Elapsed: 6.145235048s May 1 16:26:31.379: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.148574943s STEP: Saw pod success May 1 16:26:31.379: INFO: Pod "pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844" satisfied condition "success or failure" May 1 16:26:31.381: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844 container secret-volume-test: STEP: delete the pod May 1 16:26:31.404: INFO: Waiting for pod pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844 to disappear May 1 16:26:31.415: INFO: Pod pod-secrets-9953f5bd-9735-4385-98ec-9a4cb7c7b844 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:26:31.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8131" for this suite. May 1 16:26:41.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:26:42.278: INFO: namespace secrets-8131 deletion completed in 10.859169821s STEP: Destroying namespace "secret-namespace-3281" for this suite. 
May 1 16:26:48.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:26:48.944: INFO: namespace secret-namespace-3281 deletion completed in 6.666073368s • [SLOW TEST:25.977 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:26:48.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:27:22.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5006" for this suite. May 1 16:27:28.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:27:28.313: INFO: namespace namespaces-5006 deletion completed in 6.168807905s STEP: Destroying namespace "nsdeletetest-1598" for this suite. May 1 16:27:28.315: INFO: Namespace nsdeletetest-1598 was already deleted STEP: Destroying namespace "nsdeletetest-392" for this suite. May 1 16:27:34.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:27:34.402: INFO: namespace nsdeletetest-392 deletion completed in 6.086853333s • [SLOW TEST:45.458 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:27:34.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 1 16:27:34.479: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 1 16:27:34.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:43.566: INFO: stderr: "" May 1 16:27:43.566: INFO: stdout: "service/redis-slave created\n" May 1 16:27:43.566: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 1 16:27:43.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:44.100: INFO: stderr: "" May 1 16:27:44.100: INFO: stdout: "service/redis-master created\n" May 1 16:27:44.100: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 1 16:27:44.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:44.653: INFO: stderr: "" May 1 16:27:44.653: INFO: stdout: "service/frontend created\n" May 1 16:27:44.653: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 1 16:27:44.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:45.136: INFO: stderr: "" May 1 16:27:45.136: INFO: stdout: "deployment.apps/frontend created\n" May 1 16:27:45.136: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 1 16:27:45.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:45.540: INFO: stderr: "" May 1 16:27:45.540: INFO: stdout: "deployment.apps/redis-master created\n" May 1 16:27:45.541: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: 
labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 1 16:27:45.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1942' May 1 16:27:46.841: INFO: stderr: "" May 1 16:27:46.841: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 1 16:27:46.841: INFO: Waiting for all frontend pods to be Running. May 1 16:27:56.892: INFO: Waiting for frontend to serve content. May 1 16:27:58.539: INFO: Trying to add a new entry to the guestbook. May 1 16:27:58.705: INFO: Verifying that added entry can be retrieved. May 1 16:27:58.725: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources May 1 16:28:03.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:04.633: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:04.633: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 1 16:28:04.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:05.154: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:05.154: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 1 16:28:05.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:05.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:05.796: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 1 16:28:05.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:05.974: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:05.974: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 1 16:28:05.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:06.185: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:06.185: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 1 16:28:06.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1942' May 1 16:28:06.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 1 16:28:06.456: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:28:06.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1942" for this suite. May 1 16:28:52.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:28:53.009: INFO: namespace kubectl-1942 deletion completed in 46.265711969s • [SLOW TEST:78.607 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:28:53.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 1
16:29:01.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-437b7c27-c59a-463a-a503-b35f5cb374e6 -c busybox-main-container --namespace=emptydir-7371 -- cat /usr/share/volumeshare/shareddata.txt' May 1 16:29:02.083: INFO: stderr: "I0501 16:29:01.988953 2161 log.go:172] (0xc000a42630) (0xc0008bab40) Create stream\nI0501 16:29:01.989009 2161 log.go:172] (0xc000a42630) (0xc0008bab40) Stream added, broadcasting: 1\nI0501 16:29:01.992844 2161 log.go:172] (0xc000a42630) Reply frame received for 1\nI0501 16:29:01.992912 2161 log.go:172] (0xc000a42630) (0xc0008ba000) Create stream\nI0501 16:29:01.992940 2161 log.go:172] (0xc000a42630) (0xc0008ba000) Stream added, broadcasting: 3\nI0501 16:29:01.994474 2161 log.go:172] (0xc000a42630) Reply frame received for 3\nI0501 16:29:01.994508 2161 log.go:172] (0xc000a42630) (0xc00031c320) Create stream\nI0501 16:29:01.994518 2161 log.go:172] (0xc000a42630) (0xc00031c320) Stream added, broadcasting: 5\nI0501 16:29:01.995484 2161 log.go:172] (0xc000a42630) Reply frame received for 5\nI0501 16:29:02.076872 2161 log.go:172] (0xc000a42630) Data frame received for 5\nI0501 16:29:02.076911 2161 log.go:172] (0xc00031c320) (5) Data frame handling\nI0501 16:29:02.076934 2161 log.go:172] (0xc000a42630) Data frame received for 3\nI0501 16:29:02.076943 2161 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0501 16:29:02.076951 2161 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0501 16:29:02.076956 2161 log.go:172] (0xc000a42630) Data frame received for 3\nI0501 16:29:02.076966 2161 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0501 16:29:02.078260 2161 log.go:172] (0xc000a42630) Data frame received for 1\nI0501 16:29:02.078279 2161 log.go:172] (0xc0008bab40) (1) Data frame handling\nI0501 16:29:02.078290 2161 log.go:172] (0xc0008bab40) (1) Data frame sent\nI0501 16:29:02.078301 2161 log.go:172] (0xc000a42630) (0xc0008bab40) Stream removed, broadcasting: 1\nI0501 16:29:02.078571 2161 
log.go:172] (0xc000a42630) (0xc0008bab40) Stream removed, broadcasting: 1\nI0501 16:29:02.078585 2161 log.go:172] (0xc000a42630) (0xc0008ba000) Stream removed, broadcasting: 3\nI0501 16:29:02.078591 2161 log.go:172] (0xc000a42630) (0xc00031c320) Stream removed, broadcasting: 5\n" May 1 16:29:02.083: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:29:02.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7371" for this suite. May 1 16:29:12.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:29:12.344: INFO: namespace emptydir-7371 deletion completed in 10.222202467s • [SLOW TEST:19.334 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:29:12.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 16:29:13.451: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:29:19.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5233" for this suite. May 1 16:30:02.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:02.176: INFO: namespace pods-5233 deletion completed in 42.10779011s • [SLOW TEST:49.832 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:30:02.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 16:30:02.523: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 1 16:30:07.528: INFO: Pod 
name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 16:30:07.528: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 1 16:30:07.624: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2536,SelfLink:/apis/apps/v1/namespaces/deployment-2536/deployments/test-cleanup-deployment,UID:1ea56eda-ee6a-4eaa-a36a-6556b235d4df,ResourceVersion:8467244,Generation:1,CreationTimestamp:2020-05-01 16:30:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 1 16:30:07.650: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2536,SelfLink:/apis/apps/v1/namespaces/deployment-2536/replicasets/test-cleanup-deployment-55bbcbc84c,UID:78dc30e4-ccf0-45c0-beb2-107e8acf93e9,ResourceVersion:8467246,Generation:1,CreationTimestamp:2020-05-01 16:30:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment
1ea56eda-ee6a-4eaa-a36a-6556b235d4df 0xc002a955f7 0xc002a955f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:30:07.650: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 1 16:30:07.650: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2536,SelfLink:/apis/apps/v1/namespaces/deployment-2536/replicasets/test-cleanup-controller,UID:f9ac6aa6-bbb3-43b4-a2f5-ad24ae7e622f,ResourceVersion:8467245,Generation:1,CreationTimestamp:2020-05-01 16:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1ea56eda-ee6a-4eaa-a36a-6556b235d4df 0xc002a95507 0xc002a95508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 16:30:07.722: INFO: Pod "test-cleanup-controller-mc86b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mc86b,GenerateName:test-cleanup-controller-,Namespace:deployment-2536,SelfLink:/api/v1/namespaces/deployment-2536/pods/test-cleanup-controller-mc86b,UID:d15bf40c-971b-4cf9-af58-c61de58ed3f9,ResourceVersion:8467239,Generation:0,CreationTimestamp:2020-05-01 16:30:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f9ac6aa6-bbb3-43b4-a2f5-ad24ae7e622f 0xc002a95ed7 0xc002a95ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xw7cj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xw7cj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xw7cj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a95f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a95f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:30:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:30:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:30:06 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:30:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.77,StartTime:2020-05-01 16:30:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:30:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e3a0b51bbfb41481c95653d6578179e699a1ec667c19c46d208392c7127ee778}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:30:07.722: INFO: Pod "test-cleanup-deployment-55bbcbc84c-7vjhx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-7vjhx,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2536,SelfLink:/api/v1/namespaces/deployment-2536/pods/test-cleanup-deployment-55bbcbc84c-7vjhx,UID:d69e2646-e2d9-41e5-a505-2819dff1f508,ResourceVersion:8467251,Generation:0,CreationTimestamp:2020-05-01 16:30:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 78dc30e4-ccf0-45c0-beb2-107e8acf93e9 0xc002316057 0xc002316058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xw7cj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xw7cj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-xw7cj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023160d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023160f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:30:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:30:07.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2536" for this suite. 
May 1 16:30:15.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:15.856: INFO: namespace deployment-2536 deletion completed in 8.105829054s • [SLOW TEST:13.679 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:30:15.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 1 16:30:16.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc" in namespace "downward-api-8691" to be "success or failure" May 1 16:30:16.032: INFO: Pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.779075ms May 1 16:30:18.236: INFO: Pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232624177s May 1 16:30:20.240: INFO: Pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc": Phase="Running", Reason="", readiness=true. Elapsed: 4.236501578s May 1 16:30:22.243: INFO: Pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.239844572s STEP: Saw pod success May 1 16:30:22.243: INFO: Pod "downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc" satisfied condition "success or failure" May 1 16:30:22.246: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc container client-container: STEP: delete the pod May 1 16:30:22.314: INFO: Waiting for pod downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc to disappear May 1 16:30:22.361: INFO: Pod downwardapi-volume-6f92afea-db7d-4358-9803-ac24b9114efc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:30:22.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8691" for this suite. 
May 1 16:30:28.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:28.584: INFO: namespace downward-api-8691 deletion completed in 6.219743778s • [SLOW TEST:12.728 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:30:28.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 1 16:30:28.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859" in namespace "projected-8696" to be "success or failure" May 1 16:30:28.667: 
INFO: Pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859": Phase="Pending", Reason="", readiness=false. Elapsed: 10.930323ms May 1 16:30:30.670: INFO: Pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01449013s May 1 16:30:32.751: INFO: Pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094936178s May 1 16:30:34.754: INFO: Pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097812924s STEP: Saw pod success May 1 16:30:34.754: INFO: Pod "downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859" satisfied condition "success or failure" May 1 16:30:34.755: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859 container client-container: STEP: delete the pod May 1 16:30:34.955: INFO: Waiting for pod downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859 to disappear May 1 16:30:35.266: INFO: Pod downwardapi-volume-12c53c25-dccc-4f01-a91d-79d2170ed859 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:30:35.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8696" for this suite. 
May 1 16:30:41.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:41.712: INFO: namespace projected-8696 deletion completed in 6.442811638s • [SLOW TEST:13.127 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:30:41.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 16:30:42.007: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.775335ms) May 1 16:30:42.010: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.92579ms) May 1 16:30:42.012: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.282814ms) May 1 16:30:42.015: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.63881ms) May 1 16:30:42.018: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.611652ms) May 1 16:30:42.020: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.50423ms) May 1 16:30:42.023: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.534238ms) May 1 16:30:42.026: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.923614ms) May 1 16:30:42.029: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.212864ms) May 1 16:30:42.032: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.583762ms) May 1 16:30:42.034: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.629823ms) May 1 16:30:42.038: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.237579ms) May 1 16:30:42.041: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.91471ms) May 1 16:30:42.044: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.333519ms) May 1 16:30:42.048: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.575742ms) May 1 16:30:42.051: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.18873ms) May 1 16:30:42.054: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.860334ms) May 1 16:30:42.056: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.703414ms) May 1 16:30:42.059: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.960999ms) May 1 16:30:42.062: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.072994ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:30:42.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3855" for this suite. May 1 16:30:48.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:30:48.237: INFO: namespace proxy-3855 deletion completed in 6.172047499s • [SLOW TEST:6.524 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:30:48.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 1 16:30:48.491: INFO: PodSpec: initContainers 
in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:31:01.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5716" for this suite. May 1 16:31:08.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:31:08.303: INFO: namespace init-container-5716 deletion completed in 6.622901587s • [SLOW TEST:20.065 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:31:08.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 1 16:31:08.758: INFO: Waiting up to 5m0s for pod "client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0" in namespace "containers-9767" to be "success or failure" May 1 16:31:08.955: INFO: Pod 
"client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 196.740148ms May 1 16:31:11.098: INFO: Pod "client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339851011s May 1 16:31:13.101: INFO: Pod "client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342818742s May 1 16:31:15.165: INFO: Pod "client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.406017862s STEP: Saw pod success May 1 16:31:15.165: INFO: Pod "client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0" satisfied condition "success or failure" May 1 16:31:15.168: INFO: Trying to get logs from node iruya-worker pod client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0 container test-container: STEP: delete the pod May 1 16:31:15.231: INFO: Waiting for pod client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0 to disappear May 1 16:31:15.296: INFO: Pod client-containers-3b68974f-390e-43af-9a1f-292bfc1ea7a0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:31:15.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9767" for this suite. 
May 1 16:31:21.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:31:22.057: INFO: namespace containers-9767 deletion completed in 6.757765961s • [SLOW TEST:13.754 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:31:22.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:31:36.666: INFO: DNS probes using 
dns-test-90c56609-4fcc-460c-8a96-a1230027f44b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:31:53.462: INFO: File wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local from pod dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 16:31:53.466: INFO: File jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local from pod dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 16:31:53.466: INFO: Lookups using dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde failed for: [wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local] May 1 16:31:58.471: INFO: File wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local from pod dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 16:31:58.474: INFO: File jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local from pod dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 1 16:31:58.474: INFO: Lookups using dns-3849/dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde failed for: [wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local] May 1 16:32:03.613: INFO: DNS probes using dns-test-64a5d432-5355-4ce7-ab4b-37abd91d1dde succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3849.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3849.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:32:15.085: INFO: DNS probes using dns-test-9d59741b-3bd4-43dd-a72d-ab63e42adcdd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:32:15.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3849" for this suite. 
May 1 16:32:23.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:32:23.595: INFO: namespace dns-3849 deletion completed in 8.320656097s • [SLOW TEST:61.538 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:32:23.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 16:32:23.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1377' May 1 16:32:24.107: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future 
version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 16:32:24.107: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 1 16:32:26.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1377' May 1 16:32:26.347: INFO: stderr: "" May 1 16:32:26.347: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:32:26.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1377" for this suite. 
May 1 16:32:32.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:32:32.599: INFO: namespace kubectl-1377 deletion completed in 6.249052382s

• [SLOW TEST:9.004 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:32:32.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2399
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2399
STEP: Creating statefulset with conflicting port in namespace statefulset-2399
STEP: Waiting until pod test-pod will start running in namespace statefulset-2399
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2399
May 1 16:32:38.848: INFO: Observed stateful pod in namespace: statefulset-2399, name: ss-0, uid: b05d98f4-293b-48fc-900f-4154a4a6f124, status phase: Pending. Waiting for statefulset controller to delete.
May 1 16:32:42.147: INFO: Observed stateful pod in namespace: statefulset-2399, name: ss-0, uid: b05d98f4-293b-48fc-900f-4154a4a6f124, status phase: Failed. Waiting for statefulset controller to delete.
May 1 16:32:42.191: INFO: Observed stateful pod in namespace: statefulset-2399, name: ss-0, uid: b05d98f4-293b-48fc-900f-4154a4a6f124, status phase: Failed. Waiting for statefulset controller to delete.
May 1 16:32:42.222: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2399
STEP: Removing pod with conflicting port in namespace statefulset-2399
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2399 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 1 16:32:48.520: INFO: Deleting all statefulset in ns statefulset-2399
May 1 16:32:48.524: INFO: Scaling statefulset ss to 0
May 1 16:33:08.543: INFO: Waiting for statefulset status.replicas updated to 0
May 1 16:33:08.545: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:33:08.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2399" for this suite.
May 1 16:33:14.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:33:14.695: INFO: namespace statefulset-2399 deletion completed in 6.129592207s

• [SLOW TEST:42.096 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:33:14.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:33:14.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4371" for this suite.
May 1 16:33:20.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:33:20.944: INFO: namespace kubelet-test-4371 deletion completed in 6.083242889s

• [SLOW TEST:6.248 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:33:20.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-79efbe3d-8712-4078-89a7-72d869c36203
STEP: Creating a pod to test consume configMaps
May 1 16:33:21.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9" in namespace "projected-1592" to be "success or failure"
May 1 16:33:21.073: INFO: Pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.603162ms
May 1 16:33:23.077: INFO: Pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022745433s
May 1 16:33:25.082: INFO: Pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.027019983s
May 1 16:33:27.086: INFO: Pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031661522s
STEP: Saw pod success
May 1 16:33:27.086: INFO: Pod "pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9" satisfied condition "success or failure"
May 1 16:33:27.089: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9 container projected-configmap-volume-test:
STEP: delete the pod
May 1 16:33:27.110: INFO: Waiting for pod pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9 to disappear
May 1 16:33:27.114: INFO: Pod pod-projected-configmaps-4c8cb45d-12a6-44cd-8fb2-2db86bd4b3e9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:33:27.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1592" for this suite.
May 1 16:33:33.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:33:33.224: INFO: namespace projected-1592 deletion completed in 6.106451323s

• [SLOW TEST:12.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:33:33.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:33:33.434: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 1 16:33:33.563: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:33.630: INFO: Number of nodes with available pods: 0
May 1 16:33:33.630: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:34.784: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:34.787: INFO: Number of nodes with available pods: 0
May 1 16:33:34.787: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:35.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:35.970: INFO: Number of nodes with available pods: 0
May 1 16:33:35.970: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:36.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:36.647: INFO: Number of nodes with available pods: 0
May 1 16:33:36.647: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:37.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:37.637: INFO: Number of nodes with available pods: 0
May 1 16:33:37.637: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:38.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:38.650: INFO: Number of nodes with available pods: 0
May 1 16:33:38.650: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:39.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:39.637: INFO: Number of nodes with available pods: 1
May 1 16:33:39.638: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:40.671: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:40.675: INFO: Number of nodes with available pods: 1
May 1 16:33:40.675: INFO: Node iruya-worker is running more than one daemon pod
May 1 16:33:41.635: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:41.638: INFO: Number of nodes with available pods: 2
May 1 16:33:41.638: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 1 16:33:42.025: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:42.025: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:42.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:43.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:43.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:43.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:44.286: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:44.286: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:44.290: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:45.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:45.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:45.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:45.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:46.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:46.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:46.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:46.292: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:47.341: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:47.341: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:47.341: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:47.345: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:48.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:48.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:48.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:48.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:49.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:49.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:49.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:49.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:50.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:50.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:50.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:50.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:51.287: INFO: Wrong image for pod: daemon-set-g84cn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:51.287: INFO: Pod daemon-set-g84cn is not available
May 1 16:33:51.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:51.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:52.286: INFO: Pod daemon-set-rbkf4 is not available
May 1 16:33:52.286: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:52.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:53.286: INFO: Pod daemon-set-rbkf4 is not available
May 1 16:33:53.286: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:53.289: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:54.287: INFO: Pod daemon-set-rbkf4 is not available
May 1 16:33:54.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:54.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:55.286: INFO: Pod daemon-set-rbkf4 is not available
May 1 16:33:55.286: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:55.290: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:56.286: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:56.290: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:57.287: INFO: Wrong image for pod: daemon-set-st74k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 16:33:57.287: INFO: Pod daemon-set-st74k is not available
May 1 16:33:57.292: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:58.323: INFO: Pod daemon-set-c2bhh is not available
May 1 16:33:58.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 1 16:33:58.340: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:58.343: INFO: Number of nodes with available pods: 1
May 1 16:33:58.343: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 16:33:59.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:33:59.350: INFO: Number of nodes with available pods: 1
May 1 16:33:59.350: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 16:34:00.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:34:00.351: INFO: Number of nodes with available pods: 1
May 1 16:34:00.351: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 16:34:01.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:34:01.351: INFO: Number of nodes with available pods: 1
May 1 16:34:01.351: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 16:34:02.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 16:34:02.352: INFO: Number of nodes with available pods: 2
May 1 16:34:02.352: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4082, will wait for the garbage collector to delete the pods
May 1 16:34:02.424: INFO: Deleting DaemonSet.extensions daemon-set took: 6.467054ms
May 1 16:34:02.725: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.453227ms
May 1 16:34:12.233: INFO: Number of nodes with available pods: 0
May 1 16:34:12.233: INFO: Number of running nodes: 0, number of available pods: 0
May 1 16:34:12.235: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4082/daemonsets","resourceVersion":"8468234"},"items":null}
May 1 16:34:12.238: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4082/pods","resourceVersion":"8468234"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:34:12.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4082" for this suite.
May 1 16:34:18.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:34:18.354: INFO: namespace daemonsets-4082 deletion completed in 6.10298484s

• [SLOW TEST:45.130 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:34:18.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 1 16:34:18.404: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:34:31.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1132" for this suite.
May 1 16:34:37.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:34:37.966: INFO: namespace pods-1132 deletion completed in 6.102212617s

• [SLOW TEST:19.612 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:34:37.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-v7wm
STEP: Creating a pod to test atomic-volume-subpath
May 1 16:34:38.073: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v7wm" in namespace "subpath-6804" to be "success or failure"
May 1 16:34:38.090: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.514307ms
May 1 16:34:40.094: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020724492s
May 1 16:34:42.099: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 4.0254265s
May 1 16:34:44.103: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 6.029961685s
May 1 16:34:46.118: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 8.044713548s
May 1 16:34:48.122: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 10.048591338s
May 1 16:34:50.126: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 12.052931166s
May 1 16:34:52.130: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 14.056776649s
May 1 16:34:54.135: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 16.061505914s
May 1 16:34:56.155: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 18.081956639s
May 1 16:34:58.159: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 20.085960021s
May 1 16:35:00.163: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Running", Reason="", readiness=true. Elapsed: 22.09000288s
May 1 16:35:02.167: INFO: Pod "pod-subpath-test-configmap-v7wm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093642501s
STEP: Saw pod success
May 1 16:35:02.167: INFO: Pod "pod-subpath-test-configmap-v7wm" satisfied condition "success or failure"
May 1 16:35:02.170: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-v7wm container test-container-subpath-configmap-v7wm:
STEP: delete the pod
May 1 16:35:02.207: INFO: Waiting for pod pod-subpath-test-configmap-v7wm to disappear
May 1 16:35:02.218: INFO: Pod pod-subpath-test-configmap-v7wm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-v7wm
May 1 16:35:02.218: INFO: Deleting pod "pod-subpath-test-configmap-v7wm" in namespace "subpath-6804"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:35:02.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6804" for this suite.
May 1 16:35:08.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:35:08.306: INFO: namespace subpath-6804 deletion completed in 6.083008894s

• [SLOW TEST:30.339 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:35:08.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:35:08.709: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:35:10.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8028" for this suite.
May 1 16:35:16.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:35:16.251: INFO: namespace custom-resource-definition-8028 deletion completed in 6.101621697s

• [SLOW TEST:7.945 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:35:16.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:35:16.282: INFO: Creating deployment "test-recreate-deployment"
May 1 16:35:16.287: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 1 16:35:16.324: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 1 16:35:18.331: INFO: Waiting deployment "test-recreate-deployment" to complete
May 1 16:35:18.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947716, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947716, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947716, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947716, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 16:35:20.338: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 1 16:35:20.345: INFO: Updating deployment test-recreate-deployment
May 1 16:35:20.345: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 1 16:35:20.546: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6269,SelfLink:/apis/apps/v1/namespaces/deployment-6269/deployments/test-recreate-deployment,UID:6c921525-c449-48e2-b5fd-6c4bc4dcc4f8,ResourceVersion:8468516,Generation:2,CreationTimestamp:2020-05-01 16:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-01 16:35:20 +0000 UTC 2020-05-01 16:35:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-01 16:35:20 +0000 UTC 2020-05-01 16:35:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 1 16:35:20.550: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6269,SelfLink:/apis/apps/v1/namespaces/deployment-6269/replicasets/test-recreate-deployment-5c8c9cc69d,UID:9825a686-5c13-4e58-8841-ec40a70d9e88,ResourceVersion:8468512,Generation:1,CreationTimestamp:2020-05-01 16:35:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c921525-c449-48e2-b5fd-6c4bc4dcc4f8 0xc00217c1c7 0xc00217c1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:35:20.550: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 1 16:35:20.550: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6269,SelfLink:/apis/apps/v1/namespaces/deployment-6269/replicasets/test-recreate-deployment-6df85df6b9,UID:c274d642-3c99-49e7-8c3c-eac378524d4f,ResourceVersion:8468504,Generation:2,CreationTimestamp:2020-05-01 16:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6c921525-c449-48e2-b5fd-6c4bc4dcc4f8 0xc00217c447 0xc00217c448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:35:20.587: INFO: Pod "test-recreate-deployment-5c8c9cc69d-89vpc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-89vpc,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6269,SelfLink:/api/v1/namespaces/deployment-6269/pods/test-recreate-deployment-5c8c9cc69d-89vpc,UID:e84c9c8e-ffd9-4358-b54c-0903b0111c6e,ResourceVersion:8468517,Generation:0,CreationTimestamp:2020-05-01 16:35:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 9825a686-5c13-4e58-8841-ec40a70d9e88 0xc00217d757 0xc00217d758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j7klj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j7klj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j7klj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00217d7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00217d7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:35:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-01 16:35:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:35:20.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6269" for this suite. 
May 1 16:35:30.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:35:30.682: INFO: namespace deployment-6269 deletion completed in 10.091461244s

• [SLOW TEST:14.431 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:35:30.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2906
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2906
STEP: Waiting until all stateful set ss
replicas will be running in namespace statefulset-2906 May 1 16:35:30.798: INFO: Found 0 stateful pods, waiting for 1 May 1 16:35:40.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 1 16:35:40.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:35:41.071: INFO: stderr: "I0501 16:35:40.948655 2221 log.go:172] (0xc0007568f0) (0xc000346820) Create stream\nI0501 16:35:40.948711 2221 log.go:172] (0xc0007568f0) (0xc000346820) Stream added, broadcasting: 1\nI0501 16:35:40.950824 2221 log.go:172] (0xc0007568f0) Reply frame received for 1\nI0501 16:35:40.950863 2221 log.go:172] (0xc0007568f0) (0xc0003468c0) Create stream\nI0501 16:35:40.950873 2221 log.go:172] (0xc0007568f0) (0xc0003468c0) Stream added, broadcasting: 3\nI0501 16:35:40.951693 2221 log.go:172] (0xc0007568f0) Reply frame received for 3\nI0501 16:35:40.951724 2221 log.go:172] (0xc0007568f0) (0xc000346960) Create stream\nI0501 16:35:40.951734 2221 log.go:172] (0xc0007568f0) (0xc000346960) Stream added, broadcasting: 5\nI0501 16:35:40.952929 2221 log.go:172] (0xc0007568f0) Reply frame received for 5\nI0501 16:35:41.014789 2221 log.go:172] (0xc0007568f0) Data frame received for 5\nI0501 16:35:41.014928 2221 log.go:172] (0xc000346960) (5) Data frame handling\nI0501 16:35:41.015014 2221 log.go:172] (0xc000346960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 16:35:41.065567 2221 log.go:172] (0xc0007568f0) Data frame received for 5\nI0501 16:35:41.065598 2221 log.go:172] (0xc000346960) (5) Data frame handling\nI0501 16:35:41.065628 2221 log.go:172] (0xc0007568f0) Data frame received for 3\nI0501 16:35:41.065662 2221 log.go:172] (0xc0003468c0) (3) Data frame handling\nI0501 16:35:41.065693 2221 log.go:172] (0xc0003468c0) 
(3) Data frame sent\nI0501 16:35:41.065717 2221 log.go:172] (0xc0007568f0) Data frame received for 3\nI0501 16:35:41.065733 2221 log.go:172] (0xc0003468c0) (3) Data frame handling\nI0501 16:35:41.067148 2221 log.go:172] (0xc0007568f0) Data frame received for 1\nI0501 16:35:41.067185 2221 log.go:172] (0xc000346820) (1) Data frame handling\nI0501 16:35:41.067203 2221 log.go:172] (0xc000346820) (1) Data frame sent\nI0501 16:35:41.067223 2221 log.go:172] (0xc0007568f0) (0xc000346820) Stream removed, broadcasting: 1\nI0501 16:35:41.067321 2221 log.go:172] (0xc0007568f0) Go away received\nI0501 16:35:41.067513 2221 log.go:172] (0xc0007568f0) (0xc000346820) Stream removed, broadcasting: 1\nI0501 16:35:41.067529 2221 log.go:172] (0xc0007568f0) (0xc0003468c0) Stream removed, broadcasting: 3\nI0501 16:35:41.067538 2221 log.go:172] (0xc0007568f0) (0xc000346960) Stream removed, broadcasting: 5\n" May 1 16:35:41.071: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:35:41.071: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:35:41.075: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 16:35:51.079: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 16:35:51.079: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:35:51.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999562s May 1 16:35:52.169: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997184624s May 1 16:35:53.173: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.924729367s May 1 16:35:54.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.921039162s May 1 16:35:55.181: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.916263331s May 1 16:35:56.205: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 4.912284241s May 1 16:35:57.241: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.888562572s May 1 16:35:58.246: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.852297037s May 1 16:35:59.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.847428716s May 1 16:36:00.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 843.529043ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2906 May 1 16:36:01.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:36:01.597: INFO: stderr: "I0501 16:36:01.395067 2241 log.go:172] (0xc000118dc0) (0xc0006e8820) Create stream\nI0501 16:36:01.395117 2241 log.go:172] (0xc000118dc0) (0xc0006e8820) Stream added, broadcasting: 1\nI0501 16:36:01.397566 2241 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0501 16:36:01.397607 2241 log.go:172] (0xc000118dc0) (0xc0005f83c0) Create stream\nI0501 16:36:01.397617 2241 log.go:172] (0xc000118dc0) (0xc0005f83c0) Stream added, broadcasting: 3\nI0501 16:36:01.398496 2241 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0501 16:36:01.398521 2241 log.go:172] (0xc000118dc0) (0xc0005f8460) Create stream\nI0501 16:36:01.398528 2241 log.go:172] (0xc000118dc0) (0xc0005f8460) Stream added, broadcasting: 5\nI0501 16:36:01.399477 2241 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0501 16:36:01.454081 2241 log.go:172] (0xc000118dc0) Data frame received for 5\nI0501 16:36:01.454112 2241 log.go:172] (0xc0005f8460) (5) Data frame handling\nI0501 16:36:01.454130 2241 log.go:172] (0xc0005f8460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 16:36:01.590668 2241 log.go:172] (0xc000118dc0) Data frame received for 3\nI0501 16:36:01.590705 2241 log.go:172] (0xc0005f83c0) (3) 
Data frame handling\nI0501 16:36:01.590713 2241 log.go:172] (0xc0005f83c0) (3) Data frame sent\nI0501 16:36:01.590718 2241 log.go:172] (0xc000118dc0) Data frame received for 3\nI0501 16:36:01.590721 2241 log.go:172] (0xc0005f83c0) (3) Data frame handling\nI0501 16:36:01.590740 2241 log.go:172] (0xc000118dc0) Data frame received for 5\nI0501 16:36:01.590780 2241 log.go:172] (0xc0005f8460) (5) Data frame handling\nI0501 16:36:01.592741 2241 log.go:172] (0xc000118dc0) Data frame received for 1\nI0501 16:36:01.592754 2241 log.go:172] (0xc0006e8820) (1) Data frame handling\nI0501 16:36:01.592769 2241 log.go:172] (0xc0006e8820) (1) Data frame sent\nI0501 16:36:01.592785 2241 log.go:172] (0xc000118dc0) (0xc0006e8820) Stream removed, broadcasting: 1\nI0501 16:36:01.593018 2241 log.go:172] (0xc000118dc0) Go away received\nI0501 16:36:01.593066 2241 log.go:172] (0xc000118dc0) (0xc0006e8820) Stream removed, broadcasting: 1\nI0501 16:36:01.593082 2241 log.go:172] (0xc000118dc0) (0xc0005f83c0) Stream removed, broadcasting: 3\nI0501 16:36:01.593090 2241 log.go:172] (0xc000118dc0) (0xc0005f8460) Stream removed, broadcasting: 5\n" May 1 16:36:01.597: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:36:01.597: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:36:01.601: INFO: Found 1 stateful pods, waiting for 3 May 1 16:36:11.606: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 16:36:11.606: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 16:36:11.606: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 1 16:36:11.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2906 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:36:11.863: INFO: stderr: "I0501 16:36:11.763071 2261 log.go:172] (0xc000938420) (0xc000859900) Create stream\nI0501 16:36:11.763128 2261 log.go:172] (0xc000938420) (0xc000859900) Stream added, broadcasting: 1\nI0501 16:36:11.765669 2261 log.go:172] (0xc000938420) Reply frame received for 1\nI0501 16:36:11.766059 2261 log.go:172] (0xc000938420) (0xc00053e000) Create stream\nI0501 16:36:11.766083 2261 log.go:172] (0xc000938420) (0xc00053e000) Stream added, broadcasting: 3\nI0501 16:36:11.767838 2261 log.go:172] (0xc000938420) Reply frame received for 3\nI0501 16:36:11.767914 2261 log.go:172] (0xc000938420) (0xc000968000) Create stream\nI0501 16:36:11.767935 2261 log.go:172] (0xc000938420) (0xc000968000) Stream added, broadcasting: 5\nI0501 16:36:11.769238 2261 log.go:172] (0xc000938420) Reply frame received for 5\nI0501 16:36:11.855124 2261 log.go:172] (0xc000938420) Data frame received for 3\nI0501 16:36:11.855172 2261 log.go:172] (0xc00053e000) (3) Data frame handling\nI0501 16:36:11.855193 2261 log.go:172] (0xc00053e000) (3) Data frame sent\nI0501 16:36:11.855217 2261 log.go:172] (0xc000938420) Data frame received for 5\nI0501 16:36:11.855235 2261 log.go:172] (0xc000968000) (5) Data frame handling\nI0501 16:36:11.855266 2261 log.go:172] (0xc000968000) (5) Data frame sent\nI0501 16:36:11.855289 2261 log.go:172] (0xc000938420) Data frame received for 5\nI0501 16:36:11.855299 2261 log.go:172] (0xc000968000) (5) Data frame handling\nI0501 16:36:11.855310 2261 log.go:172] (0xc000938420) Data frame received for 3\nI0501 16:36:11.855320 2261 log.go:172] (0xc00053e000) (3) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 16:36:11.857434 2261 log.go:172] (0xc000938420) Data frame received for 1\nI0501 16:36:11.857464 2261 log.go:172] (0xc000859900) (1) Data frame handling\nI0501 16:36:11.857498 2261 log.go:172] (0xc000859900) (1) Data 
frame sent\nI0501 16:36:11.857901 2261 log.go:172] (0xc000938420) (0xc000859900) Stream removed, broadcasting: 1\nI0501 16:36:11.857943 2261 log.go:172] (0xc000938420) Go away received\nI0501 16:36:11.858292 2261 log.go:172] (0xc000938420) (0xc000859900) Stream removed, broadcasting: 1\nI0501 16:36:11.858316 2261 log.go:172] (0xc000938420) (0xc00053e000) Stream removed, broadcasting: 3\nI0501 16:36:11.858330 2261 log.go:172] (0xc000938420) (0xc000968000) Stream removed, broadcasting: 5\n" May 1 16:36:11.863: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:36:11.863: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:36:11.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:36:12.118: INFO: stderr: "I0501 16:36:11.980365 2282 log.go:172] (0xc000828370) (0xc00062c8c0) Create stream\nI0501 16:36:11.980418 2282 log.go:172] (0xc000828370) (0xc00062c8c0) Stream added, broadcasting: 1\nI0501 16:36:11.982876 2282 log.go:172] (0xc000828370) Reply frame received for 1\nI0501 16:36:11.982924 2282 log.go:172] (0xc000828370) (0xc00062c960) Create stream\nI0501 16:36:11.982940 2282 log.go:172] (0xc000828370) (0xc00062c960) Stream added, broadcasting: 3\nI0501 16:36:11.983693 2282 log.go:172] (0xc000828370) Reply frame received for 3\nI0501 16:36:11.983726 2282 log.go:172] (0xc000828370) (0xc00062ca00) Create stream\nI0501 16:36:11.983734 2282 log.go:172] (0xc000828370) (0xc00062ca00) Stream added, broadcasting: 5\nI0501 16:36:11.984411 2282 log.go:172] (0xc000828370) Reply frame received for 5\nI0501 16:36:12.045501 2282 log.go:172] (0xc000828370) Data frame received for 5\nI0501 16:36:12.045525 2282 log.go:172] (0xc00062ca00) (5) Data frame handling\nI0501 16:36:12.045539 2282 log.go:172] (0xc00062ca00) (5) Data 
frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 16:36:12.108504 2282 log.go:172] (0xc000828370) Data frame received for 3\nI0501 16:36:12.108541 2282 log.go:172] (0xc00062c960) (3) Data frame handling\nI0501 16:36:12.108574 2282 log.go:172] (0xc00062c960) (3) Data frame sent\nI0501 16:36:12.108587 2282 log.go:172] (0xc000828370) Data frame received for 3\nI0501 16:36:12.108600 2282 log.go:172] (0xc00062c960) (3) Data frame handling\nI0501 16:36:12.109026 2282 log.go:172] (0xc000828370) Data frame received for 5\nI0501 16:36:12.109056 2282 log.go:172] (0xc00062ca00) (5) Data frame handling\nI0501 16:36:12.111981 2282 log.go:172] (0xc000828370) Data frame received for 1\nI0501 16:36:12.112012 2282 log.go:172] (0xc00062c8c0) (1) Data frame handling\nI0501 16:36:12.112050 2282 log.go:172] (0xc00062c8c0) (1) Data frame sent\nI0501 16:36:12.112079 2282 log.go:172] (0xc000828370) (0xc00062c8c0) Stream removed, broadcasting: 1\nI0501 16:36:12.112261 2282 log.go:172] (0xc000828370) Go away received\nI0501 16:36:12.112553 2282 log.go:172] (0xc000828370) (0xc00062c8c0) Stream removed, broadcasting: 1\nI0501 16:36:12.112597 2282 log.go:172] (0xc000828370) (0xc00062c960) Stream removed, broadcasting: 3\nI0501 16:36:12.112626 2282 log.go:172] (0xc000828370) (0xc00062ca00) Stream removed, broadcasting: 5\n" May 1 16:36:12.118: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:36:12.118: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:36:12.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 16:36:12.401: INFO: stderr: "I0501 16:36:12.238637 2302 log.go:172] (0xc000a90630) (0xc000448960) Create stream\nI0501 16:36:12.238696 2302 log.go:172] (0xc000a90630) (0xc000448960) Stream added, broadcasting: 
1\nI0501 16:36:12.242206 2302 log.go:172] (0xc000a90630) Reply frame received for 1\nI0501 16:36:12.242258 2302 log.go:172] (0xc000a90630) (0xc000448000) Create stream\nI0501 16:36:12.242272 2302 log.go:172] (0xc000a90630) (0xc000448000) Stream added, broadcasting: 3\nI0501 16:36:12.243385 2302 log.go:172] (0xc000a90630) Reply frame received for 3\nI0501 16:36:12.243419 2302 log.go:172] (0xc000a90630) (0xc0005981e0) Create stream\nI0501 16:36:12.243430 2302 log.go:172] (0xc000a90630) (0xc0005981e0) Stream added, broadcasting: 5\nI0501 16:36:12.244530 2302 log.go:172] (0xc000a90630) Reply frame received for 5\nI0501 16:36:12.310107 2302 log.go:172] (0xc000a90630) Data frame received for 5\nI0501 16:36:12.310133 2302 log.go:172] (0xc0005981e0) (5) Data frame handling\nI0501 16:36:12.310149 2302 log.go:172] (0xc0005981e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 16:36:12.391060 2302 log.go:172] (0xc000a90630) Data frame received for 3\nI0501 16:36:12.391101 2302 log.go:172] (0xc000448000) (3) Data frame handling\nI0501 16:36:12.391119 2302 log.go:172] (0xc000448000) (3) Data frame sent\nI0501 16:36:12.391159 2302 log.go:172] (0xc000a90630) Data frame received for 5\nI0501 16:36:12.391211 2302 log.go:172] (0xc0005981e0) (5) Data frame handling\nI0501 16:36:12.391592 2302 log.go:172] (0xc000a90630) Data frame received for 3\nI0501 16:36:12.391623 2302 log.go:172] (0xc000448000) (3) Data frame handling\nI0501 16:36:12.394835 2302 log.go:172] (0xc000a90630) Data frame received for 1\nI0501 16:36:12.394869 2302 log.go:172] (0xc000448960) (1) Data frame handling\nI0501 16:36:12.394902 2302 log.go:172] (0xc000448960) (1) Data frame sent\nI0501 16:36:12.394932 2302 log.go:172] (0xc000a90630) (0xc000448960) Stream removed, broadcasting: 1\nI0501 16:36:12.394971 2302 log.go:172] (0xc000a90630) Go away received\nI0501 16:36:12.395437 2302 log.go:172] (0xc000a90630) (0xc000448960) Stream removed, broadcasting: 1\nI0501 16:36:12.395461 2302 
log.go:172] (0xc000a90630) (0xc000448000) Stream removed, broadcasting: 3\nI0501 16:36:12.395474 2302 log.go:172] (0xc000a90630) (0xc0005981e0) Stream removed, broadcasting: 5\n" May 1 16:36:12.401: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 16:36:12.401: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 16:36:12.401: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:36:12.445: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 1 16:36:22.452: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 16:36:22.452: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 16:36:22.452: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 16:36:22.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999373s May 1 16:36:23.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.933435129s May 1 16:36:24.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.913684537s May 1 16:36:25.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.841356563s May 1 16:36:26.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.660637962s May 1 16:36:27.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.655263517s May 1 16:36:28.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.650489486s May 1 16:36:29.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.644669806s May 1 16:36:30.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.639511147s May 1 16:36:31.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 634.652551ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-2906 May 1 16:36:32.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:36:33.104: INFO: stderr: "I0501 16:36:33.008026 2323 log.go:172] (0xc00093a370) (0xc0008e2640) Create stream\nI0501 16:36:33.008095 2323 log.go:172] (0xc00093a370) (0xc0008e2640) Stream added, broadcasting: 1\nI0501 16:36:33.010540 2323 log.go:172] (0xc00093a370) Reply frame received for 1\nI0501 16:36:33.010575 2323 log.go:172] (0xc00093a370) (0xc000974000) Create stream\nI0501 16:36:33.010587 2323 log.go:172] (0xc00093a370) (0xc000974000) Stream added, broadcasting: 3\nI0501 16:36:33.011397 2323 log.go:172] (0xc00093a370) Reply frame received for 3\nI0501 16:36:33.011432 2323 log.go:172] (0xc00093a370) (0xc00040c3c0) Create stream\nI0501 16:36:33.011447 2323 log.go:172] (0xc00093a370) (0xc00040c3c0) Stream added, broadcasting: 5\nI0501 16:36:33.012455 2323 log.go:172] (0xc00093a370) Reply frame received for 5\nI0501 16:36:33.097415 2323 log.go:172] (0xc00093a370) Data frame received for 3\nI0501 16:36:33.097457 2323 log.go:172] (0xc000974000) (3) Data frame handling\nI0501 16:36:33.097476 2323 log.go:172] (0xc000974000) (3) Data frame sent\nI0501 16:36:33.097519 2323 log.go:172] (0xc00093a370) Data frame received for 5\nI0501 16:36:33.097532 2323 log.go:172] (0xc00040c3c0) (5) Data frame handling\nI0501 16:36:33.097547 2323 log.go:172] (0xc00040c3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 16:36:33.097650 2323 log.go:172] (0xc00093a370) Data frame received for 3\nI0501 16:36:33.097674 2323 log.go:172] (0xc000974000) (3) Data frame handling\nI0501 16:36:33.098147 2323 log.go:172] (0xc00093a370) Data frame received for 5\nI0501 16:36:33.098182 2323 log.go:172] (0xc00040c3c0) (5) Data frame handling\nI0501 16:36:33.099638 2323 log.go:172] (0xc00093a370) Data frame received for 1\nI0501 16:36:33.099674 
2323 log.go:172] (0xc0008e2640) (1) Data frame handling\nI0501 16:36:33.099707 2323 log.go:172] (0xc0008e2640) (1) Data frame sent\nI0501 16:36:33.099735 2323 log.go:172] (0xc00093a370) (0xc0008e2640) Stream removed, broadcasting: 1\nI0501 16:36:33.099769 2323 log.go:172] (0xc00093a370) Go away received\nI0501 16:36:33.100114 2323 log.go:172] (0xc00093a370) (0xc0008e2640) Stream removed, broadcasting: 1\nI0501 16:36:33.100137 2323 log.go:172] (0xc00093a370) (0xc000974000) Stream removed, broadcasting: 3\nI0501 16:36:33.100147 2323 log.go:172] (0xc00093a370) (0xc00040c3c0) Stream removed, broadcasting: 5\n" May 1 16:36:33.104: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:36:33.104: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:36:33.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:36:33.401: INFO: stderr: "I0501 16:36:33.319554 2345 log.go:172] (0xc000116dc0) (0xc000840640) Create stream\nI0501 16:36:33.319621 2345 log.go:172] (0xc000116dc0) (0xc000840640) Stream added, broadcasting: 1\nI0501 16:36:33.321785 2345 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0501 16:36:33.321827 2345 log.go:172] (0xc000116dc0) (0xc0008406e0) Create stream\nI0501 16:36:33.321838 2345 log.go:172] (0xc000116dc0) (0xc0008406e0) Stream added, broadcasting: 3\nI0501 16:36:33.322537 2345 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0501 16:36:33.322570 2345 log.go:172] (0xc000116dc0) (0xc0009ee000) Create stream\nI0501 16:36:33.322581 2345 log.go:172] (0xc000116dc0) (0xc0009ee000) Stream added, broadcasting: 5\nI0501 16:36:33.323273 2345 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0501 16:36:33.393804 2345 log.go:172] (0xc000116dc0) Data frame received for 3\nI0501 16:36:33.393828 
2345 log.go:172] (0xc0008406e0) (3) Data frame handling\nI0501 16:36:33.393843 2345 log.go:172] (0xc0008406e0) (3) Data frame sent\nI0501 16:36:33.393860 2345 log.go:172] (0xc000116dc0) Data frame received for 3\nI0501 16:36:33.393869 2345 log.go:172] (0xc0008406e0) (3) Data frame handling\nI0501 16:36:33.393889 2345 log.go:172] (0xc000116dc0) Data frame received for 5\nI0501 16:36:33.393900 2345 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0501 16:36:33.393918 2345 log.go:172] (0xc0009ee000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 16:36:33.394017 2345 log.go:172] (0xc000116dc0) Data frame received for 5\nI0501 16:36:33.394053 2345 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0501 16:36:33.395361 2345 log.go:172] (0xc000116dc0) Data frame received for 1\nI0501 16:36:33.395384 2345 log.go:172] (0xc000840640) (1) Data frame handling\nI0501 16:36:33.395405 2345 log.go:172] (0xc000840640) (1) Data frame sent\nI0501 16:36:33.395424 2345 log.go:172] (0xc000116dc0) (0xc000840640) Stream removed, broadcasting: 1\nI0501 16:36:33.395448 2345 log.go:172] (0xc000116dc0) Go away received\nI0501 16:36:33.395993 2345 log.go:172] (0xc000116dc0) (0xc000840640) Stream removed, broadcasting: 1\nI0501 16:36:33.396019 2345 log.go:172] (0xc000116dc0) (0xc0008406e0) Stream removed, broadcasting: 3\nI0501 16:36:33.396044 2345 log.go:172] (0xc000116dc0) (0xc0009ee000) Stream removed, broadcasting: 5\n" May 1 16:36:33.401: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:36:33.401: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:36:33.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 16:36:33.612: INFO: stderr: "I0501 16:36:33.534990 2365 log.go:172] (0xc00012adc0) 
(0xc0009a6640) Create stream\nI0501 16:36:33.535042 2365 log.go:172] (0xc00012adc0) (0xc0009a6640) Stream added, broadcasting: 1\nI0501 16:36:33.537721 2365 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0501 16:36:33.537768 2365 log.go:172] (0xc00012adc0) (0xc0005fc1e0) Create stream\nI0501 16:36:33.537786 2365 log.go:172] (0xc00012adc0) (0xc0005fc1e0) Stream added, broadcasting: 3\nI0501 16:36:33.538725 2365 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0501 16:36:33.538769 2365 log.go:172] (0xc00012adc0) (0xc0008ce000) Create stream\nI0501 16:36:33.538783 2365 log.go:172] (0xc00012adc0) (0xc0008ce000) Stream added, broadcasting: 5\nI0501 16:36:33.539655 2365 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0501 16:36:33.604696 2365 log.go:172] (0xc00012adc0) Data frame received for 3\nI0501 16:36:33.604722 2365 log.go:172] (0xc0005fc1e0) (3) Data frame handling\nI0501 16:36:33.604731 2365 log.go:172] (0xc0005fc1e0) (3) Data frame sent\nI0501 16:36:33.604750 2365 log.go:172] (0xc00012adc0) Data frame received for 5\nI0501 16:36:33.604800 2365 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0501 16:36:33.604835 2365 log.go:172] (0xc0008ce000) (5) Data frame sent\nI0501 16:36:33.604861 2365 log.go:172] (0xc00012adc0) Data frame received for 5\nI0501 16:36:33.604872 2365 log.go:172] (0xc0008ce000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 16:36:33.604894 2365 log.go:172] (0xc00012adc0) Data frame received for 3\nI0501 16:36:33.604934 2365 log.go:172] (0xc0005fc1e0) (3) Data frame handling\nI0501 16:36:33.606422 2365 log.go:172] (0xc00012adc0) Data frame received for 1\nI0501 16:36:33.606445 2365 log.go:172] (0xc0009a6640) (1) Data frame handling\nI0501 16:36:33.606458 2365 log.go:172] (0xc0009a6640) (1) Data frame sent\nI0501 16:36:33.606471 2365 log.go:172] (0xc00012adc0) (0xc0009a6640) Stream removed, broadcasting: 1\nI0501 16:36:33.606486 2365 log.go:172] (0xc00012adc0) Go away received\nI0501 
16:36:33.606945 2365 log.go:172] (0xc00012adc0) (0xc0009a6640) Stream removed, broadcasting: 1\nI0501 16:36:33.606977 2365 log.go:172] (0xc00012adc0) (0xc0005fc1e0) Stream removed, broadcasting: 3\nI0501 16:36:33.607001 2365 log.go:172] (0xc00012adc0) (0xc0008ce000) Stream removed, broadcasting: 5\n" May 1 16:36:33.612: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 16:36:33.612: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 16:36:33.612: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 1 16:36:53.628: INFO: Deleting all statefulset in ns statefulset-2906 May 1 16:36:53.631: INFO: Scaling statefulset ss to 0 May 1 16:36:53.640: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:36:53.642: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:36:53.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2906" for this suite. 
May 1 16:37:05.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:37:06.578: INFO: namespace statefulset-2906 deletion completed in 12.773703052s • [SLOW TEST:95.895 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:37:06.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-8ee07c90-adc4-499c-a31f-71e94f5bf6b6 in namespace container-probe-3079 May 1 16:37:14.092: INFO: Started pod busybox-8ee07c90-adc4-499c-a31f-71e94f5bf6b6 in namespace container-probe-3079 STEP: checking the pod's current state and verifying that 
restartCount is present May 1 16:37:14.095: INFO: Initial restart count of pod busybox-8ee07c90-adc4-499c-a31f-71e94f5bf6b6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:41:15.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3079" for this suite. May 1 16:41:21.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:41:22.072: INFO: namespace container-probe-3079 deletion completed in 6.684245911s • [SLOW TEST:255.494 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:41:22.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 16:41:39.208: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:39.527: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:41.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:41.532: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:43.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:43.531: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:45.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:45.531: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:47.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:47.530: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:49.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:49.532: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:51.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:51.532: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:53.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:53.531: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:55.527: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:55.532: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:57.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:57.531: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:41:59.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:41:59.531: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:42:01.528: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear May 1 16:42:01.532: INFO: Pod pod-with-prestop-exec-hook still exists May 1 16:42:03.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 16:42:03.531: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:42:03.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8745" for this suite. May 1 16:42:29.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:42:29.634: INFO: namespace container-lifecycle-hook-8745 deletion completed in 26.090686102s • [SLOW TEST:67.561 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:42:29.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:42:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4450" for this suite. May 1 16:42:44.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:42:44.107: INFO: namespace kubelet-test-4450 deletion completed in 6.201886294s • [SLOW TEST:14.473 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:42:44.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 16:42:44.206: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 1 16:42:49.271: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 16:42:49.271: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 1 16:42:51.325: INFO: Creating deployment "test-rollover-deployment" May 1 16:42:51.516: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 1 16:42:53.564: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 1 16:42:53.581: INFO: Ensure that both replica sets have 1 created replica May 1 16:42:53.585: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 1 16:42:53.591: INFO: Updating deployment test-rollover-deployment May 1 16:42:53.591: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 1 16:42:55.923: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 1 16:42:56.138: INFO: Make sure deployment "test-rollover-deployment" is complete May 1 16:42:56.473: INFO: all replica sets need to contain the pod-template-hash label May 1 16:42:56.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948175, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:42:58.480: INFO: all replica sets need to contain the pod-template-hash label May 1 16:42:58.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948175, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:00.482: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:00.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948175, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:02.482: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:02.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948175, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:04.482: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:04.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948183, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:06.490: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:06.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948183, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:08.482: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:08.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948183, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:10.483: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:10.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948183, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:12.480: INFO: all replica sets need to contain the pod-template-hash label May 1 16:43:12.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948183, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:14.543: INFO: May 1 16:43:14.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948193, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948171, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:43:16.482: INFO: May 1 16:43:16.482: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 1 16:43:16.491: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4287,SelfLink:/apis/apps/v1/namespaces/deployment-4287/deployments/test-rollover-deployment,UID:975ee374-f931-47ed-b1e0-c80753d04c8a,ResourceVersion:8469816,Generation:2,CreationTimestamp:2020-05-01 16:42:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 16:42:52 +0000 UTC 2020-05-01 16:42:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 16:43:14 +0000 UTC 2020-05-01 16:42:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 16:43:16.494: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4287,SelfLink:/apis/apps/v1/namespaces/deployment-4287/replicasets/test-rollover-deployment-854595fc44,UID:a7186891-1670-4a4f-ad49-4a6e31a32bc4,ResourceVersion:8469804,Generation:2,CreationTimestamp:2020-05-01 16:42:53 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 975ee374-f931-47ed-b1e0-c80753d04c8a 0xc002317307 0xc002317308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 16:43:16.494: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 1 16:43:16.494: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4287,SelfLink:/apis/apps/v1/namespaces/deployment-4287/replicasets/test-rollover-controller,UID:a38dd70e-a08c-4590-95ff-1e5c5eae3810,ResourceVersion:8469815,Generation:2,CreationTimestamp:2020-05-01 16:42:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 975ee374-f931-47ed-b1e0-c80753d04c8a 0xc002317237 0xc002317238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:43:16.494: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4287,SelfLink:/apis/apps/v1/namespaces/deployment-4287/replicasets/test-rollover-deployment-9b8b997cf,UID:550a5436-9faa-4dc5-8809-c7bf9766c569,ResourceVersion:8469748,Generation:2,CreationTimestamp:2020-05-01 16:42:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 975ee374-f931-47ed-b1e0-c80753d04c8a 0xc0023173d0 0xc0023173d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:43:16.498: INFO: Pod "test-rollover-deployment-854595fc44-9p2wf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-9p2wf,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4287,SelfLink:/api/v1/namespaces/deployment-4287/pods/test-rollover-deployment-854595fc44-9p2wf,UID:a0e548b1-5ad1-4452-8d25-8479b86e42ba,ResourceVersion:8469782,Generation:0,CreationTimestamp:2020-05-01 16:42:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 a7186891-1670-4a4f-ad49-4a6e31a32bc4 0xc002317ff7 0xc002317ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sq9lg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sq9lg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-sq9lg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a94080} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a940a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:42:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:43:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:43:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:42:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.96,StartTime:2020-05-01 16:42:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 16:43:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f0190e1f0bee0f6d614ee4e42c5c27f241c0d99132f2b82dd5c6213889210ee6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:43:16.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4287" for this suite. May 1 16:43:24.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:43:24.587: INFO: namespace deployment-4287 deletion completed in 8.085759246s • [SLOW TEST:40.479 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:43:24.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-a8840190-4494-4d2c-8b0a-d2aa81555612 STEP: Creating a pod to test consume secrets May 1 
16:43:24.929: INFO: Waiting up to 5m0s for pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074" in namespace "secrets-8458" to be "success or failure" May 1 16:43:24.960: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074": Phase="Pending", Reason="", readiness=false. Elapsed: 30.761449ms May 1 16:43:26.963: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034457872s May 1 16:43:28.967: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03839516s May 1 16:43:30.971: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074": Phase="Running", Reason="", readiness=true. Elapsed: 6.042543419s May 1 16:43:32.976: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046794719s STEP: Saw pod success May 1 16:43:32.976: INFO: Pod "pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074" satisfied condition "success or failure" May 1 16:43:32.979: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074 container secret-volume-test: STEP: delete the pod May 1 16:43:33.275: INFO: Waiting for pod pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074 to disappear May 1 16:43:33.558: INFO: Pod pod-secrets-967c5743-7180-45af-b4a1-39d7a6a87074 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:43:33.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8458" for this suite. 
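For reference, the "consumable from pods in volume with mappings" case above exercises a manifest shaped roughly like the following. This is a hedged sketch, not the exact objects the suite generates: the secret name, key, data value, remapped path, and container image are all illustrative.

```yaml
# Illustrative sketch of the secret-volume-with-mappings scenario.
# Names, keys, and the image are hypothetical; the suite generates its own.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example
data:
  data-1: dmFsdWUtMQ==          # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # image is an assumption
    args: ["--file_content=/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:                     # the "mapping": project a key to a new path
      - key: data-1
        path: new-path-data-1
```

The `items` list is what distinguishes this test from the plain secret-volume case: the key `data-1` surfaces in the container at the remapped path rather than under its own name.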
May 1 16:43:39.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:43:39.637: INFO: namespace secrets-8458 deletion completed in 6.075444788s • [SLOW TEST:15.050 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:43:39.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 1 16:43:39.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9983 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 1 16:43:51.415: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0501 16:43:51.222468 2384 log.go:172] (0xc0004ce210) (0xc000816140) Create stream\nI0501 16:43:51.222497 2384 log.go:172] (0xc0004ce210) (0xc000816140) Stream added, broadcasting: 1\nI0501 16:43:51.230031 2384 log.go:172] (0xc0004ce210) Reply frame received for 1\nI0501 16:43:51.230075 2384 log.go:172] (0xc0004ce210) (0xc000864000) Create stream\nI0501 16:43:51.230088 2384 log.go:172] (0xc0004ce210) (0xc000864000) Stream added, broadcasting: 3\nI0501 16:43:51.231805 2384 log.go:172] (0xc0004ce210) Reply frame received for 3\nI0501 16:43:51.231852 2384 log.go:172] (0xc0004ce210) (0xc0008161e0) Create stream\nI0501 16:43:51.231862 2384 log.go:172] (0xc0004ce210) (0xc0008161e0) Stream added, broadcasting: 5\nI0501 16:43:51.233782 2384 log.go:172] (0xc0004ce210) Reply frame received for 5\nI0501 16:43:51.233809 2384 log.go:172] (0xc0004ce210) (0xc000a861e0) Create stream\nI0501 16:43:51.233816 2384 log.go:172] (0xc0004ce210) (0xc000a861e0) Stream added, broadcasting: 7\nI0501 16:43:51.234436 2384 log.go:172] (0xc0004ce210) Reply frame received for 7\nI0501 16:43:51.234560 2384 log.go:172] (0xc000864000) (3) Writing data frame\nI0501 16:43:51.234641 2384 log.go:172] (0xc000864000) (3) Writing data frame\nI0501 16:43:51.235351 2384 log.go:172] (0xc0004ce210) Data frame received for 5\nI0501 16:43:51.235360 2384 log.go:172] (0xc0008161e0) (5) Data frame handling\nI0501 16:43:51.235366 2384 log.go:172] (0xc0008161e0) (5) Data frame sent\nI0501 16:43:51.235785 2384 log.go:172] (0xc0004ce210) Data frame received for 5\nI0501 16:43:51.235795 2384 log.go:172] (0xc0008161e0) (5) Data frame handling\nI0501 16:43:51.235802 2384 log.go:172] (0xc0008161e0) (5) Data frame sent\nI0501 16:43:51.274265 2384 log.go:172] (0xc0004ce210) Data frame received for 5\nI0501 16:43:51.274288 2384 log.go:172] (0xc0008161e0) (5) Data frame handling\nI0501 16:43:51.274311 2384 
log.go:172] (0xc0004ce210) Data frame received for 7\nI0501 16:43:51.274352 2384 log.go:172] (0xc0004ce210) Data frame received for 1\nI0501 16:43:51.274390 2384 log.go:172] (0xc000816140) (1) Data frame handling\nI0501 16:43:51.274407 2384 log.go:172] (0xc000816140) (1) Data frame sent\nI0501 16:43:51.274466 2384 log.go:172] (0xc000a861e0) (7) Data frame handling\nI0501 16:43:51.274493 2384 log.go:172] (0xc0004ce210) (0xc000816140) Stream removed, broadcasting: 1\nI0501 16:43:51.274596 2384 log.go:172] (0xc0004ce210) (0xc000816140) Stream removed, broadcasting: 1\nI0501 16:43:51.274607 2384 log.go:172] (0xc0004ce210) (0xc000864000) Stream removed, broadcasting: 3\nI0501 16:43:51.274612 2384 log.go:172] (0xc0004ce210) (0xc0008161e0) Stream removed, broadcasting: 5\nI0501 16:43:51.274617 2384 log.go:172] (0xc0004ce210) (0xc000a861e0) Stream removed, broadcasting: 7\nI0501 16:43:51.275237 2384 log.go:172] (0xc0004ce210) (0xc000864000) Stream removed, broadcasting: 3\nI0501 16:43:51.275248 2384 log.go:172] (0xc0004ce210) Go away received\n" May 1 16:43:51.415: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:43:53.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9983" for this suite. 
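The stderr captured above notes that `kubectl run --generator=job/v1` is deprecated. A Job manifest roughly equivalent to what that generator produced for this test might look like the sketch below; the exact generated object is not shown in the log, so field values beyond the name, namespace, image, restart policy, and command are assumptions.

```yaml
# Hedged sketch of the Job the deprecated `kubectl run --generator=job/v1
# --restart=OnFailure --stdin` invocation would create for the test above.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-9983
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true              # matches --stdin on the kubectl invocation
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

Note that the interactive `--attach=true --stdin --rm` behaviour (pipe stdin to the pod, then delete the Job on exit) is part of the `kubectl run` client flow, not of the manifest itself.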
May 1 16:44:05.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:05.588: INFO: namespace kubectl-9983 deletion completed in 12.160405753s • [SLOW TEST:25.951 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:44:05.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:44:14.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4421" for this suite. May 1 16:44:20.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:20.624: INFO: namespace namespaces-4421 deletion completed in 6.109012995s STEP: Destroying namespace "nsdeletetest-6173" for this suite. May 1 16:44:20.626: INFO: Namespace nsdeletetest-6173 was already deleted STEP: Destroying namespace "nsdeletetest-9275" for this suite. May 1 16:44:26.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:26.760: INFO: namespace nsdeletetest-9275 deletion completed in 6.133913584s • [SLOW TEST:21.172 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:44:26.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be 
consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-69ad6d89-49b6-4dd9-84a9-8cd2552f8516 STEP: Creating a pod to test consume secrets May 1 16:44:26.844: INFO: Waiting up to 5m0s for pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369" in namespace "secrets-6575" to be "success or failure" May 1 16:44:26.847: INFO: Pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836157ms May 1 16:44:28.870: INFO: Pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026707064s May 1 16:44:30.889: INFO: Pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045325995s May 1 16:44:32.901: INFO: Pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056849413s STEP: Saw pod success May 1 16:44:32.901: INFO: Pod "pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369" satisfied condition "success or failure" May 1 16:44:32.903: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369 container secret-volume-test: STEP: delete the pod May 1 16:44:32.973: INFO: Waiting for pod pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369 to disappear May 1 16:44:33.074: INFO: Pod pod-secrets-5950ce43-f2eb-40d5-a33d-ffb981f25369 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:44:33.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6575" for this suite. 
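The "consumable in multiple volumes" case above mounts the same Secret at two separate paths in one pod. A minimal sketch, with hypothetical names and an assumed image, looks like this:

```yaml
# Hedged sketch: one Secret projected through two volumes in the same pod.
# Secret name, mount paths, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example
  - name: secret-volume-2
    secret:
      secretName: secret-test-example   # same Secret, second mount
```

Each volume is an independent projection, so the test verifies the same keys appear under both mount paths.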
May 1 16:44:39.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:44:39.195: INFO: namespace secrets-6575 deletion completed in 6.116570631s • [SLOW TEST:12.434 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:44:39.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 16:44:44.392: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:44:44.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6198" for this suite.
May 1 16:44:50.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:44:50.623: INFO: namespace container-runtime-6198 deletion completed in 6.090309726s
• [SLOW TEST:11.428 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:44:50.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 1 16:44:50.678: INFO: Waiting up to 5m0s for pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade" in namespace "downward-api-7840" to be "success or failure"
May 1 16:44:50.696: INFO: Pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade": Phase="Pending", Reason="", readiness=false. Elapsed: 18.522654ms
May 1 16:44:52.724: INFO: Pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046693253s
May 1 16:44:54.729: INFO: Pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050870298s
May 1 16:44:56.733: INFO: Pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055563991s
STEP: Saw pod success
May 1 16:44:56.733: INFO: Pod "downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade" satisfied condition "success or failure"
May 1 16:44:56.736: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade container dapi-container:
STEP: delete the pod
May 1 16:44:56.855: INFO: Waiting for pod downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade to disappear
May 1 16:44:56.859: INFO: Pod downward-api-0aedc33f-30fb-45ba-bbba-2102bbb72ade no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:44:56.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7840" for this suite.
May 1 16:45:02.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:45:02.960: INFO: namespace downward-api-7840 deletion completed in 6.096969142s
• [SLOW TEST:12.336 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:45:02.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 1 16:45:03.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1649,SelfLink:/api/v1/namespaces/watch-1649/configmaps/e2e-watch-test-watch-closed,UID:d2862069-7fa7-456a-ace1-ab77fdb5122c,ResourceVersion:8470222,Generation:0,CreationTimestamp:2020-05-01 16:45:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 16:45:03.078: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1649,SelfLink:/api/v1/namespaces/watch-1649/configmaps/e2e-watch-test-watch-closed,UID:d2862069-7fa7-456a-ace1-ab77fdb5122c,ResourceVersion:8470223,Generation:0,CreationTimestamp:2020-05-01 16:45:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 1 16:45:03.091: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1649,SelfLink:/api/v1/namespaces/watch-1649/configmaps/e2e-watch-test-watch-closed,UID:d2862069-7fa7-456a-ace1-ab77fdb5122c,ResourceVersion:8470224,Generation:0,CreationTimestamp:2020-05-01 16:45:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 1 16:45:03.091: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1649,SelfLink:/api/v1/namespaces/watch-1649/configmaps/e2e-watch-test-watch-closed,UID:d2862069-7fa7-456a-ace1-ab77fdb5122c,ResourceVersion:8470225,Generation:0,CreationTimestamp:2020-05-01 16:45:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:45:03.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1649" for this suite.
May 1 16:45:09.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:45:09.218: INFO: namespace watch-1649 deletion completed in 6.110243772s
• [SLOW TEST:6.258 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:45:09.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4529.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4529.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 1 16:45:15.311: INFO: DNS probes using dns-4529/dns-test-f1ccfd05-a2bb-45e3-8721-5ba7eb2853cd succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:45:15.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4529" for this suite.
May 1 16:45:21.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:45:21.520: INFO: namespace dns-4529 deletion completed in 6.104659312s
• [SLOW TEST:12.301 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:45:21.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7d970803-bef8-4dc7-95c4-213f4a81e4c4
STEP: Creating a pod to test consume configMaps
May 1 16:45:21.599: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb" in namespace "projected-3071" to be "success or failure"
May 1 16:45:21.614: INFO: Pod "pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.588683ms
May 1 16:45:23.619: INFO: Pod "pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020364752s
May 1 16:45:25.623: INFO: Pod "pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02385583s
STEP: Saw pod success
May 1 16:45:25.623: INFO: Pod "pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb" satisfied condition "success or failure"
May 1 16:45:25.626: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb container projected-configmap-volume-test:
STEP: delete the pod
May 1 16:45:25.742: INFO: Waiting for pod pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb to disappear
May 1 16:45:25.752: INFO: Pod pod-projected-configmaps-9487de38-8f58-440e-b7e6-b48c1fbeb5cb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:45:25.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3071" for this suite.
May 1 16:45:31.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:45:31.892: INFO: namespace projected-3071 deletion completed in 6.136932473s
• [SLOW TEST:10.372 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:45:31.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:45:31.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23" in namespace "downward-api-7928" to be "success or failure"
May 1 16:45:31.998: INFO: Pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23": Phase="Pending", Reason="", readiness=false. Elapsed: 17.33269ms
May 1 16:45:34.001: INFO: Pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020902555s
May 1 16:45:36.005: INFO: Pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024358966s
May 1 16:45:38.245: INFO: Pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264452044s
STEP: Saw pod success
May 1 16:45:38.245: INFO: Pod "downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23" satisfied condition "success or failure"
May 1 16:45:38.247: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23 container client-container:
STEP: delete the pod
May 1 16:45:38.611: INFO: Waiting for pod downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23 to disappear
May 1 16:45:38.782: INFO: Pod downwardapi-volume-103eddf2-2f87-473b-a595-970ba4b88d23 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:45:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7928" for this suite.
May 1 16:45:47.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:45:47.083: INFO: namespace downward-api-7928 deletion completed in 8.296472038s
• [SLOW TEST:15.190 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:45:47.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 16:45:47.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2075'
May 1 16:45:48.013: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 16:45:48.013: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 1 16:45:50.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2075'
May 1 16:45:50.288: INFO: stderr: ""
May 1 16:45:50.288: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:45:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2075" for this suite.
May 1 16:46:12.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:46:12.544: INFO: namespace kubectl-2075 deletion completed in 22.156106474s
• [SLOW TEST:25.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:46:12.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 16:46:12.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2053'
May 1 16:46:12.773: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 16:46:12.773: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 1 16:46:12.941: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-mw2gk]
May 1 16:46:12.941: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-mw2gk" in namespace "kubectl-2053" to be "running and ready"
May 1 16:46:13.072: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Pending", Reason="", readiness=false. Elapsed: 131.017234ms
May 1 16:46:15.283: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342079959s
May 1 16:46:17.369: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427743801s
May 1 16:46:19.372: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431221921s
May 1 16:46:21.401: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46005451s
May 1 16:46:23.405: INFO: Pod "e2e-test-nginx-rc-mw2gk": Phase="Running", Reason="", readiness=true. Elapsed: 10.464193725s
May 1 16:46:23.405: INFO: Pod "e2e-test-nginx-rc-mw2gk" satisfied condition "running and ready"
May 1 16:46:23.405: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-mw2gk]
May 1 16:46:23.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2053'
May 1 16:46:23.679: INFO: stderr: ""
May 1 16:46:23.679: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
May 1 16:46:23.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2053'
May 1 16:46:23.806: INFO: stderr: ""
May 1 16:46:23.806: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:46:23.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2053" for this suite.
May 1 16:46:45.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:46:46.018: INFO: namespace kubectl-2053 deletion completed in 22.209049072s
• [SLOW TEST:33.474 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:46:46.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 16:46:46.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5290'
May 1 16:46:46.254: INFO: stderr: ""
May 1 16:46:46.254: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
May 1 16:46:46.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5290'
May 1 16:46:51.604: INFO: stderr: ""
May 1 16:46:51.604: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:46:51.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5290" for this suite.
May 1 16:46:57.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:46:57.857: INFO: namespace kubectl-5290 deletion completed in 6.12092913s
• [SLOW TEST:11.838 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:46:57.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2299
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2299 to expose endpoints map[]
May 1 16:46:58.010: INFO: Get endpoints failed (47.91115ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 1 16:46:59.779: INFO: successfully validated that service multi-endpoint-test in namespace services-2299 exposes endpoints map[] (1.816687683s elapsed)
STEP: Creating pod pod1 in namespace services-2299
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2299 to expose endpoints map[pod1:[100]]
May 1 16:47:04.492: INFO: successfully validated that service multi-endpoint-test in namespace services-2299 exposes endpoints map[pod1:[100]] (4.485346568s elapsed)
STEP: Creating pod pod2 in namespace services-2299
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2299 to expose endpoints map[pod1:[100] pod2:[101]]
May 1 16:47:07.686: INFO: successfully validated that service multi-endpoint-test in namespace services-2299 exposes endpoints map[pod1:[100] pod2:[101]] (3.191412883s elapsed)
STEP: Deleting pod pod1 in namespace services-2299
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2299 to expose endpoints map[pod2:[101]]
May 1 16:47:08.744: INFO: successfully validated that service multi-endpoint-test in namespace services-2299 exposes endpoints map[pod2:[101]] (1.053112295s elapsed)
STEP: Deleting pod pod2 in namespace services-2299
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2299 to expose endpoints map[]
May 1 16:47:09.763: INFO: successfully validated that service multi-endpoint-test in namespace services-2299 exposes endpoints map[] (1.014781431s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:47:09.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2299" for this suite.
May 1 16:47:31.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:47:32.025: INFO: namespace services-2299 deletion completed in 22.109560398s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:34.168 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:47:32.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2cd2a075-389f-4154-ad1c-12f68ecd3cd3
STEP: Creating a pod to test consume configMaps
May 1 16:47:32.115: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47" in namespace "configmap-1139" to be "success or failure"
May 1 16:47:32.119: INFO: Pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.86352ms
May 1 16:47:34.123: INFO: Pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008099307s
May 1 16:47:36.148: INFO: Pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47": Phase="Running", Reason="", readiness=true. Elapsed: 4.033485929s
May 1 16:47:39.042: INFO: Pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.926684894s
STEP: Saw pod success
May 1 16:47:39.042: INFO: Pod "pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47" satisfied condition "success or failure"
May 1 16:47:39.044: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47 container configmap-volume-test:
STEP: delete the pod
May 1 16:47:39.983: INFO: Waiting for pod pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47 to disappear
May 1 16:47:40.754: INFO: Pod pod-configmaps-9b3db887-b014-46d7-b464-295adbc42a47 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:47:40.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1139" for this suite.
May 1 16:47:46.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:47:46.912: INFO: namespace configmap-1139 deletion completed in 6.152795946s
• [SLOW TEST:14.886 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:47:46.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 1 16:47:46.976: INFO: Waiting up to 5m0s for pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca" in namespace "emptydir-4772" to be "success or failure"
May 1 16:47:46.988: INFO: Pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.617758ms
May 1 16:47:49.215: INFO: Pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239428142s
May 1 16:47:51.219: INFO: Pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243627992s
May 1 16:47:53.223: INFO: Pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247243094s
STEP: Saw pod success
May 1 16:47:53.223: INFO: Pod "pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca" satisfied condition "success or failure"
May 1 16:47:53.225: INFO: Trying to get logs from node iruya-worker2 pod pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca container test-container:
STEP: delete the pod
May 1 16:47:53.260: INFO: Waiting for pod pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca to disappear
May 1 16:47:53.286: INFO: Pod pod-18c999a5-0dc6-475f-87f6-62af82e7a2ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:47:53.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4772" for this suite.
May 1 16:47:59.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:47:59.400: INFO: namespace emptydir-4772 deletion completed in 6.110369212s
• [SLOW TEST:12.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:47:59.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 1 16:47:59.463: INFO: Waiting up to 5m0s for pod "pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f" in namespace "emptydir-3005" to be "success or failure"
May 1 16:47:59.466: INFO: Pod "pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748461ms
May 1 16:48:01.471: INFO: Pod "pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308265s
May 1 16:48:03.475: INFO: Pod "pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011563072s
STEP: Saw pod success
May 1 16:48:03.475: INFO: Pod "pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f" satisfied condition "success or failure"
May 1 16:48:03.478: INFO: Trying to get logs from node iruya-worker pod pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f container test-container:
STEP: delete the pod
May 1 16:48:03.498: INFO: Waiting for pod pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f to disappear
May 1 16:48:03.502: INFO: Pod pod-9b4aa11d-ddc7-4f5f-bb94-b50eb537f72f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3005" for this suite.
May 1 16:48:09.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:48:09.598: INFO: namespace emptydir-3005 deletion completed in 6.092182691s
• [SLOW TEST:10.197 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:48:09.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
May 1 16:48:09.660: INFO: Waiting up to 5m0s for pod "pod-191eeffa-2103-4352-96f2-6efa6cdc6fca" in namespace "emptydir-2226" to be "success or failure"
May 1 16:48:09.677: INFO: Pod "pod-191eeffa-2103-4352-96f2-6efa6cdc6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.850521ms
May 1 16:48:11.754: INFO: Pod "pod-191eeffa-2103-4352-96f2-6efa6cdc6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094027311s
May 1 16:48:13.758: INFO: Pod "pod-191eeffa-2103-4352-96f2-6efa6cdc6fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098068997s
STEP: Saw pod success
May 1 16:48:13.758: INFO: Pod "pod-191eeffa-2103-4352-96f2-6efa6cdc6fca" satisfied condition "success or failure"
May 1 16:48:13.760: INFO: Trying to get logs from node iruya-worker2 pod pod-191eeffa-2103-4352-96f2-6efa6cdc6fca container test-container:
STEP: delete the pod
May 1 16:48:13.779: INFO: Waiting for pod pod-191eeffa-2103-4352-96f2-6efa6cdc6fca to disappear
May 1 16:48:13.795: INFO: Pod pod-191eeffa-2103-4352-96f2-6efa6cdc6fca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:13.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2226" for this suite.
May 1 16:48:19.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:48:19.922: INFO: namespace emptydir-2226 deletion completed in 6.123534642s
• [SLOW TEST:10.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:48:19.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 1 16:48:20.042: INFO: Waiting up to 5m0s for pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0" in namespace "downward-api-6588" to be "success or failure"
May 1 16:48:20.044: INFO: Pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471356ms
May 1 16:48:22.049: INFO: Pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006889587s
May 1 16:48:24.053: INFO: Pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.010539728s
May 1 16:48:26.057: INFO: Pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014553459s
STEP: Saw pod success
May 1 16:48:26.057: INFO: Pod "downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0" satisfied condition "success or failure"
May 1 16:48:26.059: INFO: Trying to get logs from node iruya-worker pod downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0 container dapi-container:
STEP: delete the pod
May 1 16:48:26.113: INFO: Waiting for pod downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0 to disappear
May 1 16:48:26.131: INFO: Pod downward-api-99fd3335-807d-4a12-8195-96fb8d5b16d0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:26.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6588" for this suite.
May 1 16:48:32.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:48:32.254: INFO: namespace downward-api-6588 deletion completed in 6.114496998s
• [SLOW TEST:12.331 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:48:32.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
May 1 16:48:32.348: INFO: Waiting up to 5m0s for pod "pod-3611b367-ef25-490c-9605-3c287cbd05dc" in namespace "emptydir-7066" to be "success or failure"
May 1 16:48:32.355: INFO: Pod "pod-3611b367-ef25-490c-9605-3c287cbd05dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875797ms
May 1 16:48:34.359: INFO: Pod "pod-3611b367-ef25-490c-9605-3c287cbd05dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01090027s
May 1 16:48:36.363: INFO: Pod "pod-3611b367-ef25-490c-9605-3c287cbd05dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014977487s
STEP: Saw pod success
May 1 16:48:36.363: INFO: Pod "pod-3611b367-ef25-490c-9605-3c287cbd05dc" satisfied condition "success or failure"
May 1 16:48:36.366: INFO: Trying to get logs from node iruya-worker2 pod pod-3611b367-ef25-490c-9605-3c287cbd05dc container test-container:
STEP: delete the pod
May 1 16:48:36.388: INFO: Waiting for pod pod-3611b367-ef25-490c-9605-3c287cbd05dc to disappear
May 1 16:48:36.415: INFO: Pod pod-3611b367-ef25-490c-9605-3c287cbd05dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:36.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7066" for this suite.
May 1 16:48:42.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:48:42.515: INFO: namespace emptydir-7066 deletion completed in 6.09605927s
• [SLOW TEST:10.260 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:48:42.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0501 16:48:43.636640 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 16:48:43.636: INFO: For apiserver_request_total:
	For apiserver_request_latencies_summary:
	For apiserver_init_events_total:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:43.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4720" for this suite.
May 1 16:48:49.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:48:49.770: INFO: namespace gc-4720 deletion completed in 6.130913538s
• [SLOW TEST:7.255 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:48:49.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:48:50.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077" in namespace "projected-1072" to be "success or failure"
May 1 16:48:50.218: INFO: Pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077": Phase="Pending", Reason="", readiness=false. Elapsed: 35.881637ms
May 1 16:48:52.221: INFO: Pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038857743s
May 1 16:48:54.306: INFO: Pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12304601s
May 1 16:48:56.351: INFO: Pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168236403s
STEP: Saw pod success
May 1 16:48:56.351: INFO: Pod "downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077" satisfied condition "success or failure"
May 1 16:48:56.354: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077 container client-container:
STEP: delete the pod
May 1 16:48:56.859: INFO: Waiting for pod downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077 to disappear
May 1 16:48:57.090: INFO: Pod downwardapi-volume-01d528ce-8b0d-4386-a31d-f4f68a636077 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:48:57.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1072" for this suite.
May 1 16:49:03.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:49:03.990: INFO: namespace projected-1072 deletion completed in 6.897067681s
• [SLOW TEST:14.220 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:49:03.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:49:04.073: INFO: Creating deployment "nginx-deployment"
May 1 16:49:04.106: INFO: Waiting for observed generation 1
May 1 16:49:06.168: INFO: Waiting for all required pods to come up
May 1 16:49:06.313: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 1 16:49:20.629: INFO: Waiting for deployment "nginx-deployment" to complete
May 1 16:49:20.634: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 1 16:49:20.640: INFO: Updating deployment nginx-deployment
May 1 16:49:20.640: INFO: Waiting for observed generation 2
May 1 16:49:22.684: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 1 16:49:22.687: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 1 16:49:22.690: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 1 16:49:22.699: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 1 16:49:22.699: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 1 16:49:22.701: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 1 16:49:22.707: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 1 16:49:22.707: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 1 16:49:22.713: INFO: Updating deployment nginx-deployment
May 1 16:49:22.713: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 1 16:49:23.029: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 1 16:49:23.357: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 1 16:49:23.690: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3030,SelfLink:/apis/apps/v1/namespaces/deployment-3030/deployments/nginx-deployment,UID:e983cd28-94fc-4385-a417-6b5650878c65,ResourceVersion:8471361,Generation:3,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-01 16:49:22 +0000 UTC 2020-05-01 16:49:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-01 16:49:23 +0000 UTC 2020-05-01 16:49:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 1 16:49:23.774: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3030,SelfLink:/apis/apps/v1/namespaces/deployment-3030/replicasets/nginx-deployment-55fb7cb77f,UID:1a1d0592-2001-42f4-84b6-36d71944465a,ResourceVersion:8471344,Generation:3,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e983cd28-94fc-4385-a417-6b5650878c65 0xc000995117 0xc000995118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 16:49:23.774: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 1 16:49:23.774: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3030,SelfLink:/apis/apps/v1/namespaces/deployment-3030/replicasets/nginx-deployment-7b8c6f4498,UID:dbd75ba4-58e5-4350-8a51-788cf283753d,ResourceVersion:8471385,Generation:3,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e983cd28-94fc-4385-a417-6b5650878c65 0xc0009951e7 0xc0009951e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-2cmsr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2cmsr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-2cmsr,UID:9e98192b-3a79-4af5-80ea-3e4e1e5568fe,ResourceVersion:8471336,Generation:0,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522947 0xc002522948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0025229d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025229f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-01 16:49:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-58hk8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-58hk8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-58hk8,UID:d8625e8e-4342-4f71-8d51-95d2cf5ce0a3,ResourceVersion:8471386,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522ac0 0xc002522ac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002522b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002522b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-8257g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8257g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-8257g,UID:1545b07e-96e1-434a-bbd3-1870389f0e1f,ResourceVersion:8471388,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522be7 0xc002522be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002522c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002522c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-blpz5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-blpz5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-blpz5,UID:9ff1786c-eae2-4745-8c8b-dd9635814081,ResourceVersion:8471382,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522d17 0xc002522d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002522d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002522db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-cjnfw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cjnfw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-cjnfw,UID:56d25d26-59de-45e7-bfa3-8f443289c2da,ResourceVersion:8471326,Generation:0,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522e37 0xc002522e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002522eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002522ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-01 16:49:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.873: INFO: Pod "nginx-deployment-55fb7cb77f-g9mfw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g9mfw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-g9mfw,UID:5ca5bf22-4719-45d4-a539-460f944b92a1,ResourceVersion:8471310,Generation:0,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002522fa0 0xc002522fa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002523020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002523040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-01 16:49:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-gv5g2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gv5g2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-gv5g2,UID:3c21f869-2e56-413b-adb7-b4f9ababcc94,ResourceVersion:8471352,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002523230 0xc002523231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0025232e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002523300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-jhvdb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jhvdb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-jhvdb,UID:9916691f-a52b-4896-ad44-94f27e2d847e,ResourceVersion:8471366,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc0025233f7 0xc0025233f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025234b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025234f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-m9s4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m9s4t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-m9s4t,UID:2cb84b4d-208f-400a-9b6e-0947635e611e,ResourceVersion:8471396,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002523577 0xc002523578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002523690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025236b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-pflln" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pflln,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-pflln,UID:0ca075f0-3d86-4bcd-a31f-6d717739f4c3,ResourceVersion:8471364,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002523807 0xc002523808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0025238b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025238d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-qj2tp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qj2tp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-qj2tp,UID:bd605cb8-d64a-4415-a949-fe4393345e28,ResourceVersion:8471313,Generation:0,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc0025239a7 0xc0025239a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002523a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002523a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-01 16:49:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-wcscr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcscr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-wcscr,UID:0ffebf5c-0141-443b-bfc4-973aecd4b986,ResourceVersion:8471387,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002523c30 0xc002523c31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002523d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002523d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 1 16:49:23.874: INFO: Pod "nginx-deployment-55fb7cb77f-z6m25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z6m25,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-55fb7cb77f-z6m25,UID:9db36439-f3b0-436c-b8d2-266ee19e4a80,ResourceVersion:8471334,Generation:0,CreationTimestamp:2020-05-01 16:49:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1a1d0592-2001-42f4-84b6-36d71944465a 0xc002523e07 0xc002523e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002523e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002523ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-01 16:49:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.874: INFO: Pod "nginx-deployment-7b8c6f4498-567gm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-567gm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-567gm,UID:1eeee6ac-3e61-4933-a40f-2fe0fd476288,ResourceVersion:8471389,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0060 0xc000ca0061}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0130} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-6lcwh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6lcwh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-6lcwh,UID:eafc10a9-872a-4357-b9a1-bd5ebc77f6b7,ResourceVersion:8471356,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0237 0xc000ca0238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca02b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-8w5cs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8w5cs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-8w5cs,UID:e2cf45ce-df70-4282-bc21-76de4648d98b,ResourceVersion:8471397,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0417 0xc000ca0418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0500} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-01 16:49:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-clbq4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-clbq4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-clbq4,UID:e0baa7df-d3d2-4d25-8166-2d64910f0c98,ResourceVersion:8471241,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0687 0xc000ca0688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0700} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.220,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bff254896a1d756507b3c0b0bd82781471fbbacf6a1360d97f6658fe34ed08d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-fngp2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fngp2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-fngp2,UID:8b5f7b0b-0d79-48fc-a1c7-d9c087d4438f,ResourceVersion:8471260,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0987 0xc000ca0988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.222,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cf0a725979f6b9a35ba92010163c078f9c4feaacd48f7a1a97e1feaccefca5d8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-gbfjz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gbfjz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-gbfjz,UID:d759dad8-1766-41e8-84f7-36c250c8f389,ResourceVersion:8471379,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0bc7 0xc000ca0bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-hbsrx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hbsrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-hbsrx,UID:8f41ba33-0cb4-45d6-83ca-781e64a25834,ResourceVersion:8471381,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0d87 0xc000ca0d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca0e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca0eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.875: INFO: Pod "nginx-deployment-7b8c6f4498-hcnpm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hcnpm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-hcnpm,UID:c630bc68-376c-4b57-9dbb-18a1a3c6aaf0,ResourceVersion:8471378,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca0f57 0xc000ca0f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1060} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca10b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-hr22l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hr22l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-hr22l,UID:8f900bd3-bfa9-4504-aedf-967918be1772,ResourceVersion:8471380,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1137 0xc000ca1138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca11b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca11d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-ktm4q" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ktm4q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-ktm4q,UID:9e60074d-ebbf-43c4-a499-7617091b7cff,ResourceVersion:8471224,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1267 0xc000ca1268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca13c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca13e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.219,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bc535cc41df408947ccf6d1b6792ad012db01ff1627e6ae5bb321fb98788b239}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-lk9pb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lk9pb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-lk9pb,UID:f92a28c6-405c-4b43-bad4-4e24f6b21e2a,ResourceVersion:8471266,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1687 0xc000ca1688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1770} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.110,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b4bc29ab3bcedb3310d359857e890a6fecba31215d0dd87b10c55f2deaec5043}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-lzgvg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lzgvg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-lzgvg,UID:a2b5497b-018c-4ae5-9ecf-cc9f0f3017fd,ResourceVersion:8471247,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1897 0xc000ca1898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1910} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.109,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e122c3b9418d14d8ed1d91786ed19f4872d73ce6210526bf807438eadc5bc2f0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-m59t5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m59t5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-m59t5,UID:8ba01f45-c907-44f2-9e3f-2914d929654d,ResourceVersion:8471255,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1a27 0xc000ca1a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.221,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9bbb1f3e5dbb349bd05e45d8d96a7ce651bdacba1bec5a7b26827f622cff32e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-mj5hp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mj5hp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-mj5hp,UID:7e9f5d14-3822-48af-bf0f-ff635ba7351e,ResourceVersion:8471383,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1ba7 0xc000ca1ba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-qccx5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qccx5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-qccx5,UID:20d31039-1b71-4443-8580-6413d2977528,ResourceVersion:8471392,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1cd7 0xc000ca1cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-qxb6v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qxb6v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-qxb6v,UID:22a31a74-ea1c-41f0-9440-83d1a3f45b75,ResourceVersion:8471402,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1e07 0xc000ca1e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ca1e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ca1ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-01 16:49:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.876: INFO: Pod "nginx-deployment-7b8c6f4498-v4jvf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v4jvf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-v4jvf,UID:594186a7-7528-4dbb-a70f-919011d1c04d,ResourceVersion:8471390,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc000ca1f87 0xc000ca1f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00291c010} {node.kubernetes.io/unreachable Exists NoExecute 0xc00291c030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.877: INFO: Pod "nginx-deployment-7b8c6f4498-xj49c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xj49c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-xj49c,UID:c944152a-7bed-42c2-8084-043ffb0b2703,ResourceVersion:8471391,Generation:0,CreationTimestamp:2020-05-01 16:49:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc00291c117 0xc00291c118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00291c1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00291c220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.877: INFO: Pod "nginx-deployment-7b8c6f4498-zgxvj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zgxvj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-zgxvj,UID:d4196ba6-582e-4e6e-a248-8b27ce8d0abf,ResourceVersion:8471235,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc00291c367 0xc00291c368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00291c3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00291c430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.108,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://db27a3da5faa44a24e937654e3cf24b2192beb13a95d403b6741b72bfaa0c4c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 16:49:23.877: INFO: Pod "nginx-deployment-7b8c6f4498-zxw4q" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zxw4q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3030,SelfLink:/api/v1/namespaces/deployment-3030/pods/nginx-deployment-7b8c6f4498-zxw4q,UID:2c4fd880-89a9-49b1-8ea8-5d08200a30a1,ResourceVersion:8471217,Generation:0,CreationTimestamp:2020-05-01 16:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dbd75ba4-58e5-4350-8a51-788cf283753d 0xc00291c5a7 0xc00291c5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mldxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mldxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mldxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00291c6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00291c6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:49:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.107,StartTime:2020-05-01 16:49:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 16:49:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1fbf7e467de71a2cbd8003a4837606634084f7d36f2be43aa10ab581fd7b02d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:49:23.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-3030" for this suite. May 1 16:49:46.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:49:46.405: INFO: namespace deployment-3030 deletion completed in 22.368029207s • [SLOW TEST:42.414 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:49:46.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 1 16:49:46.840: INFO: Waiting up to 5m0s for pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8" in namespace "downward-api-6527" to be "success or failure" May 1 16:49:46.890: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.236942ms May 1 16:49:48.895: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05510905s May 1 16:49:50.899: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.059581591s May 1 16:49:52.904: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063862301s May 1 16:49:54.907: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Running", Reason="", readiness=true. Elapsed: 8.067494977s May 1 16:49:56.911: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Running", Reason="", readiness=true. Elapsed: 10.071523766s May 1 16:49:58.916: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Running", Reason="", readiness=true. Elapsed: 12.075737284s May 1 16:50:01.181: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Running", Reason="", readiness=true. Elapsed: 14.340981354s May 1 16:50:03.184: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.344720655s STEP: Saw pod success May 1 16:50:03.185: INFO: Pod "downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8" satisfied condition "success or failure" May 1 16:50:03.187: INFO: Trying to get logs from node iruya-worker pod downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8 container dapi-container: STEP: delete the pod May 1 16:50:03.339: INFO: Waiting for pod downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8 to disappear May 1 16:50:03.358: INFO: Pod downward-api-db466d60-ef0b-4d79-9be8-26c0755767f8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:50:03.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6527" for this suite. 
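The repeated `Phase="Pending" … Elapsed: …` records above come from the framework polling the pod roughly every two seconds until it reaches a terminal phase or the 5m0s deadline expires. A simplified sketch of that loop in Python (not the framework's actual Go implementation; `get_phase` is a stand-in for the API call):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until a terminal pod phase or the timeout.

    Mirrors the e2e log pattern 'Waiting up to 5m0s for pod ... to be
    "success or failure"': poll about every two seconds, reporting the
    observed phase and elapsed time on each attempt.
    """
    start = time.monotonic()
    deadline = start + timeout
    while True:
        phase = get_phase()  # stand-in for reading pod.Status.Phase
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated phase sequence, as in the log: Pending, Pending, ..., Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda s: None)
```

The real framework additionally treats "success or failure" as the satisfied condition for either terminal phase, which is why a `Failed` pod also ends the wait rather than timing out.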
May 1 16:50:09.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:50:09.501: INFO: namespace downward-api-6527 deletion completed in 6.139394485s
• [SLOW TEST:23.095 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:50:09.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
May 1 16:50:09.601: INFO: Waiting up to 5m0s for pod "client-containers-37873bda-62a9-4249-b33d-6981fc2d5767" in namespace "containers-2411" to be "success or failure"
May 1 16:50:09.610: INFO: Pod "client-containers-37873bda-62a9-4249-b33d-6981fc2d5767": Phase="Pending", Reason="", readiness=false. Elapsed: 9.0206ms
May 1 16:50:11.851: INFO: Pod "client-containers-37873bda-62a9-4249-b33d-6981fc2d5767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250319002s
May 1 16:50:13.855: INFO: Pod "client-containers-37873bda-62a9-4249-b33d-6981fc2d5767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.254257197s
STEP: Saw pod success
May 1 16:50:13.855: INFO: Pod "client-containers-37873bda-62a9-4249-b33d-6981fc2d5767" satisfied condition "success or failure"
May 1 16:50:13.859: INFO: Trying to get logs from node iruya-worker pod client-containers-37873bda-62a9-4249-b33d-6981fc2d5767 container test-container:
STEP: delete the pod
May 1 16:50:13.903: INFO: Waiting for pod client-containers-37873bda-62a9-4249-b33d-6981fc2d5767 to disappear
May 1 16:50:13.923: INFO: Pod client-containers-37873bda-62a9-4249-b33d-6981fc2d5767 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:50:13.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2411" for this suite.
May 1 16:50:19.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:50:20.045: INFO: namespace containers-2411 deletion completed in 6.118301564s
• [SLOW TEST:10.543 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:50:20.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 1 16:50:20.104: INFO: PodSpec: initContainers in spec.initContainers
May 1 16:51:12.912: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8ad4ec48-a38a-4796-b51a-84074e4fdab1", GenerateName:"", Namespace:"init-container-3844", SelfLink:"/api/v1/namespaces/init-container-3844/pods/pod-init-8ad4ec48-a38a-4796-b51a-84074e4fdab1", UID:"6b7c9374-e5a8-4f86-936e-bbbcdbb610ac", ResourceVersion:"8471927", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723948620, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"104711680"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dvn8g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil),
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00249c000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dvn8g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dvn8g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dvn8g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002fd2088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ed2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fd2110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fd2130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002fd2138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002fd213c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948620, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948620, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948620, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948620, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.127", StartTime:(*v1.Time)(0xc002c76060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020ee070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020ee0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://56d49ad0f6db1dce254ba366dbf32cd02b2565683f788367498e7e0486545dff"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c760a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c76080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:51:12.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3844" for this suite.
May 1 16:51:34.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:51:35.066: INFO: namespace init-container-3844 deletion completed in 22.145104181s
• [SLOW TEST:75.020 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:51:35.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
May 1 16:51:35.148: INFO: Waiting up to 5m0s for pod "var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9" in namespace "var-expansion-243" to be "success or failure"
May 1 16:51:35.159: INFO: Pod "var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.317758ms
May 1 16:51:37.254: INFO: Pod "var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105980486s
May 1 16:51:39.257: INFO: Pod "var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109402968s
STEP: Saw pod success
May 1 16:51:39.257: INFO: Pod "var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9" satisfied condition "success or failure"
May 1 16:51:39.260: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9 container dapi-container:
STEP: delete the pod
May 1 16:51:39.321: INFO: Waiting for pod var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9 to disappear
May 1 16:51:39.354: INFO: Pod var-expansion-8057734c-3a27-4209-b1ac-00f2516413e9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:51:39.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-243" for this suite.
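Editor's note: the substitution being tested here is Kubernetes' `$(VAR)` expansion in a container's `command`/`args`: a `$(VAR)` reference is replaced when `VAR` is defined among the container's env vars, `$$` escapes a literal `$`, and an undefined reference is left as written. The sketch below is a simplified Python reimplementation of those documented rules for illustration; it is not the upstream Go implementation.

```python
# Simplified sketch of Kubernetes $(VAR) expansion semantics
# (as documented for container command/args), for illustration only.
def expand(s, env):
    out = []
    i = 0
    while i < len(s):
        c = s[i]
        if c == "$" and i + 1 < len(s):
            nxt = s[i + 1]
            if nxt == "$":                 # "$$" escapes a literal "$"
                out.append("$")
                i += 2
                continue
            if nxt == "(":
                end = s.find(")", i + 2)
                if end != -1:
                    name = s[i + 2:end]
                    if name in env:        # defined -> substitute value
                        out.append(env[name])
                    else:                  # undefined -> leave as written
                        out.append(s[i:end + 1])
                    i = end + 1
                    continue
        out.append(c)
        i += 1
    return "".join(out)
```

For example, with `env = {"TEST_VAR": "test-value"}`, an arg of `"$(TEST_VAR)"` expands to `"test-value"`, which is what the test reads back from the container's output.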
May 1 16:51:45.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:51:45.444: INFO: namespace var-expansion-243 deletion completed in 6.085895218s
• [SLOW TEST:10.379 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:51:45.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 1 16:51:45.501: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
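Editor's note: "registering the sample API server" means creating an APIService object so the aggregation layer proxies a whole API group/version to a Service backed by the sample-apiserver deployment seen in the log. A minimal sketch of such an object as a Python dict follows; the group `wardle.k8s.io` matches the upstream sample-apiserver, while the Service name and priorities are illustrative assumptions, and the `caBundle` that would verify the backing server is omitted.

```python
# Sketch of an APIService registration (apiregistration.k8s.io/v1).
# An APIService's metadata.name must be "<version>.<group>".
def sample_apiservice(namespace):
    group, version = "wardle.k8s.io", "v1alpha1"
    return {
        "apiVersion": "apiregistration.k8s.io/v1",
        "kind": "APIService",
        "metadata": {"name": f"{version}.{group}"},
        "spec": {
            "group": group,
            "version": version,
            # The Service the aggregator proxies requests to (name is
            # illustrative; the e2e test uses its own deployment/service).
            "service": {"name": "sample-api", "namespace": namespace},
            "groupPriorityMinimum": 2000,   # illustrative priorities
            "versionPriority": 200,
            # "caBundle": "<base64 PEM>",   # omitted in this sketch
        },
    }
```

Once the APIService reports Available, requests to `/apis/wardle.k8s.io/v1alpha1/...` on the main API server are forwarded to the registered backend, which is what "ready to handle requests" refers to below.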
May 1 16:51:46.146: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 1 16:51:48.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:51:50.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948706, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:51:53.301: INFO: Waited 620.010128ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:51:53.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9787" for this suite. May 1 16:52:00.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:52:00.229: INFO: namespace aggregator-9787 deletion completed in 6.434633074s • [SLOW TEST:14.784 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:52:00.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 16:52:04.833: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6e6f4056-5d42-4b24-9cf7-9bc2c35a37bf" May 1 16:52:04.833: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6e6f4056-5d42-4b24-9cf7-9bc2c35a37bf" in namespace "pods-2204" to be "terminated due to deadline exceeded" May 1 16:52:04.870: INFO: Pod "pod-update-activedeadlineseconds-6e6f4056-5d42-4b24-9cf7-9bc2c35a37bf": Phase="Running", Reason="", readiness=true. Elapsed: 36.647605ms May 1 16:52:06.874: INFO: Pod "pod-update-activedeadlineseconds-6e6f4056-5d42-4b24-9cf7-9bc2c35a37bf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.040487659s May 1 16:52:06.874: INFO: Pod "pod-update-activedeadlineseconds-6e6f4056-5d42-4b24-9cf7-9bc2c35a37bf" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:52:06.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2204" for this suite. 
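Editor's note: the repeated "Waiting up to 5m0s ... Elapsed: ..." lines throughout this log come from a polling loop: the framework fetches the pod every couple of seconds and tests a condition (here, `Phase="Failed"` with `Reason="DeadlineExceeded"`) until it holds or the timeout expires. A simplified, cluster-free sketch of that pattern (the function name and parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for(condition, timeout_s=300.0, poll_s=2.0,
             sleep=time.sleep, clock=time.monotonic):
    """Poll `condition` every poll_s seconds until it returns True
    or timeout_s elapses; return whether it was satisfied."""
    deadline = clock() + timeout_s
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(poll_s)

# Simulated pod phases, standing in for repeated GETs of the pod:
phases = iter(["Running", "Running", "Failed"])
satisfied = wait_for(lambda: next(phases) == "Failed",
                     timeout_s=5.0, poll_s=0.0, sleep=lambda s: None)
```

The `sleep`/`clock` parameters are injected only so the sketch can be exercised without real delays; the e2e framework's loop is the same shape with a real 2-second poll interval and a 5-minute budget.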
May 1 16:52:12.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:52:12.998: INFO: namespace pods-2204 deletion completed in 6.11927976s
• [SLOW TEST:12.768 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:52:12.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-98bd9b24-bfe8-47a6-9022-a11e46b74b9c
STEP: Creating a pod to test consume configMaps
May 1 16:52:13.078: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218" in namespace "configmap-2153" to be "success or failure"
May 1 16:52:13.098: INFO: Pod "pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218": Phase="Pending", Reason="", readiness=false. Elapsed: 20.278632ms
May 1 16:52:15.103: INFO: Pod "pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024952552s
May 1 16:52:17.107: INFO: Pod "pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029014888s
STEP: Saw pod success
May 1 16:52:17.107: INFO: Pod "pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218" satisfied condition "success or failure"
May 1 16:52:17.110: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218 container configmap-volume-test:
STEP: delete the pod
May 1 16:52:17.139: INFO: Waiting for pod pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218 to disappear
May 1 16:52:17.148: INFO: Pod pod-configmaps-3a4292a5-b362-440c-9e5e-81769236c218 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:52:17.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2153" for this suite.
May 1 16:52:23.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:52:23.243: INFO: namespace configmap-2153 deletion completed in 6.091801316s
• [SLOW TEST:10.245 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:52:23.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f3af9ff8-b450-415e-9a37-5d84b828af58
STEP: Creating configMap with name cm-test-opt-upd-fa621c0d-6337-440b-98c6-1147df6a00b8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f3af9ff8-b450-415e-9a37-5d84b828af58
STEP: Updating configmap cm-test-opt-upd-fa621c0d-6337-440b-98c6-1147df6a00b8
STEP: Creating configMap with name cm-test-opt-create-8e5280fa-6c1a-459f-a479-41be21f81545
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:52:31.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8856" for this suite.
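Editor's note: the "opt-del / opt-upd / opt-create" configMaps above are consumed through a projected volume whose sources are marked `optional: true`, which is why the pod keeps running while one configMap is deleted, one is updated, and one is created underneath it. A sketch of that volume definition as a Python dict (field names follow the v1 schema; the volume name is an illustrative assumption):

```python
# Sketch of a projected volume combining three optional configMaps,
# mirroring the opt-del/opt-upd/opt-create setup in this test.
def projected_optional_configmaps(del_name, upd_name, create_name):
    return {
        "name": "projected-configmap-volume",   # illustrative name
        "projected": {
            "sources": [
                # optional: True means a missing configMap leaves its
                # files absent instead of failing the mount.
                {"configMap": {"name": del_name, "optional": True}},
                {"configMap": {"name": upd_name, "optional": True}},
                {"configMap": {"name": create_name, "optional": True}},
            ]
        },
    }
```

The kubelet syncs such volumes periodically, so the test's final step simply polls the mounted files until the delete, update, and create are all reflected.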
May 1 16:52:55.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:52:55.623: INFO: namespace projected-8856 deletion completed in 24.155505272s
• [SLOW TEST:32.379 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:52:55.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:52:56.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042" in namespace "downward-api-7717" to be "success or failure"
May 1 16:52:56.328: INFO: Pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042": Phase="Pending", Reason="", readiness=false. Elapsed: 42.008876ms
May 1 16:52:58.333: INFO: Pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046251514s
May 1 16:53:00.336: INFO: Pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049433037s
May 1 16:53:02.339: INFO: Pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052666045s
STEP: Saw pod success
May 1 16:53:02.339: INFO: Pod "downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042" satisfied condition "success or failure"
May 1 16:53:02.341: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042 container client-container:
STEP: delete the pod
May 1 16:53:02.378: INFO: Waiting for pod downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042 to disappear
May 1 16:53:02.419: INFO: Pod downwardapi-volume-32cc9de3-9e7b-4265-85e6-30d453817042 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:53:02.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7717" for this suite.
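Editor's note: unlike the env-var variant earlier, this test exposes a resource value as a file through a downward-API volume, using `resourceFieldRef` rather than `fieldRef`. A sketch of the volume definition as a Python dict; the volume name, file path, and divisor below are illustrative assumptions, not values from the log.

```python
# Sketch of a downward-API volume exposing a container's cpu limit
# as a file. With divisor "1m", a limit of "1250m" would be written
# to the file as "1250"; divisor and paths are illustrative.
def downward_api_cpu_limit_volume(container_name):
    return {
        "name": "podinfo",
        "downwardAPI": {
            "items": [{
                "path": "cpu_limit",
                "resourceFieldRef": {
                    "containerName": container_name,
                    "resource": "limits.cpu",
                    "divisor": "1m",
                },
            }]
        },
    }
```

The test's container (named `client-container` in the log) mounts this volume, cats the file, and the framework compares the logged value against the limit declared in the pod spec.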
May 1 16:53:10.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:53:10.596: INFO: namespace downward-api-7717 deletion completed in 8.107342113s
• [SLOW TEST:14.973 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:53:10.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-fbc57517-40a5-44a0-8867-e8d0c91a20e7
STEP: Creating a pod to test consume configMaps
May 1 16:53:11.051: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7" in namespace "configmap-4390" to be "success or failure"
May 1 16:53:11.060: INFO: Pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369767ms
May 1 16:53:13.065: INFO: Pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014284361s
May 1 16:53:15.375: INFO: Pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324426335s
May 1 16:53:17.378: INFO: Pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.327367879s
STEP: Saw pod success
May 1 16:53:17.378: INFO: Pod "pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7" satisfied condition "success or failure"
May 1 16:53:17.380: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7 container configmap-volume-test:
STEP: delete the pod
May 1 16:53:17.442: INFO: Waiting for pod pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7 to disappear
May 1 16:53:17.464: INFO: Pod pod-configmaps-0a119d21-8e99-4a5e-8b39-d7a5e97824c7 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:53:17.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4390" for this suite.
May 1 16:53:23.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:53:23.574: INFO: namespace configmap-4390 deletion completed in 6.107254901s
• [SLOW TEST:12.977 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:53:23.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0501 16:54:03.962346 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 16:54:03.962: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:54:03.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9762" for this suite.
May 1 16:54:17.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:54:18.052: INFO: namespace gc-9762 deletion completed in 14.086675248s
• [SLOW TEST:54.478 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:54:18.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9098
I0501 16:54:18.110267 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9098, replica count: 1
I0501 16:54:19.160661 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 16:54:20.160868 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 16:54:21.161063 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 16:54:22.161399 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 1 16:54:22.294: INFO: Created: latency-svc-pp9zz
May 1 16:54:22.334: INFO: Got endpoints: latency-svc-pp9zz [72.863576ms]
May 1 16:54:22.395: INFO: Created: latency-svc-lnwf5
May 1 16:54:22.419: INFO: Got endpoints: latency-svc-lnwf5 [84.580693ms]
May 1 16:54:22.466: INFO: Created: latency-svc-8kckr
May 1 16:54:22.469: INFO: Got endpoints: latency-svc-8kckr [134.320423ms]
May 1 16:54:22.516: INFO: Created: latency-svc-hsj8r
May 1 16:54:22.548: INFO: Got endpoints: latency-svc-hsj8r [213.408602ms]
May 1 16:54:22.611: INFO: Created: latency-svc-pjtvf
May 1 16:54:22.650: INFO: Got endpoints: latency-svc-pjtvf [314.845021ms]
May 1 16:54:22.784: INFO: Created: latency-svc-k9rlb
May 1 16:54:22.787: INFO: Got endpoints: latency-svc-k9rlb [452.923703ms]
May 1 16:54:22.911: INFO: Created: latency-svc-trhfz
May 1 16:54:22.935: INFO: Got endpoints: latency-svc-trhfz [600.651928ms]
May 1 16:54:23.359: INFO: Created: latency-svc-5bwmv
May 1 16:54:23.538: INFO: Got endpoints: latency-svc-5bwmv [1.203113031s]
May 1 16:54:23.553: INFO: Created: latency-svc-9wqt6
May 1 16:54:23.600: INFO: Got endpoints: latency-svc-9wqt6 [1.265689112s]
May 1 16:54:23.714: INFO: Created: latency-svc-9c6r9
May 1 16:54:23.734: INFO: Got endpoints: latency-svc-9c6r9 [1.399679735s]
May 1 16:54:23.994: INFO: Created: latency-svc-2rnkx
May 1 16:54:24.049: INFO: Got endpoints: latency-svc-2rnkx [1.714463407s]
May 1 16:54:24.318: INFO: Created: latency-svc-4jwd9
May 1 16:54:24.320: INFO: Got endpoints: latency-svc-4jwd9 [1.985353039s]
May 1 16:54:24.389: INFO: Created: latency-svc-fwznb
May 1 16:54:24.412: INFO: Got endpoints: latency-svc-fwznb [2.077195632s]
May 1 16:54:24.661: INFO: Created: latency-svc-vd4rp
May 1 16:54:24.807: INFO: Got endpoints: latency-svc-vd4rp [2.472128869s]
May 1 16:54:24.810: INFO: Created: latency-svc-g86cx
May 1 16:54:24.815: INFO: Got endpoints: latency-svc-g86cx [2.480282176s]
May 1 16:54:24.863: INFO: Created: latency-svc-2kbtb
May 1 16:54:24.932: INFO: Got endpoints: latency-svc-2kbtb [2.597675534s]
May 1 16:54:24.976: INFO: Created: latency-svc-mspdt
May 1 16:54:24.993: INFO: Got endpoints: latency-svc-mspdt [2.573681106s]
May 1 16:54:25.014: INFO: Created: latency-svc-gh22b
May 1 16:54:25.076: INFO: Got endpoints: latency-svc-gh22b [2.607529412s]
May 1 16:54:25.108: INFO: Created: latency-svc-tmw2h
May 1 16:54:25.125: INFO: Got endpoints: latency-svc-tmw2h [2.577639785s]
May 1 16:54:25.146: INFO: Created: latency-svc-5mjx8
May 1 16:54:25.161: INFO: Got endpoints: latency-svc-5mjx8 [2.511563102s]
May 1 16:54:25.256: INFO: Created: latency-svc-dqjgp
May 1 16:54:25.283: INFO: Got endpoints: latency-svc-dqjgp [2.495014896s]
May 1 16:54:25.455: INFO: Created: latency-svc-hh6qs
May 1 16:54:25.462: INFO: Got endpoints: latency-svc-hh6qs [2.526414172s]
May 1 16:54:25.692: INFO: Created: latency-svc-r5s4d
May 1 16:54:25.784: INFO: Got endpoints: latency-svc-r5s4d [2.245761731s]
May 1 16:54:25.807: INFO: Created: latency-svc-dq94b
May 1 16:54:25.835: INFO: Got endpoints: latency-svc-dq94b [2.234489517s]
May 1 16:54:25.870: INFO: Created: latency-svc-6j6vg
May 1 16:54:25.915: INFO: Got endpoints: latency-svc-6j6vg [2.180102608s]
May 1 16:54:25.931: INFO: Created: latency-svc-5cj4j
May 1 16:54:25.969: INFO: Got endpoints: latency-svc-5cj4j [1.919642433s]
May 1 16:54:26.089: INFO: Created: latency-svc-8f4dg
May 1 16:54:26.152: INFO: Got endpoints: latency-svc-8f4dg [1.832182286s]
May 1 16:54:26.156: INFO: Created: latency-svc-rxx7s
May 1 16:54:26.164: INFO: Got endpoints: latency-svc-rxx7s [1.751744777s]
May 1 16:54:26.232: INFO: Created: latency-svc-bmt5f
May 1 16:54:26.236: INFO: Got endpoints: latency-svc-bmt5f [1.429235434s]
May 1 16:54:26.472: INFO: Created: latency-svc-26ksb
May 1 16:54:26.483: INFO: Got endpoints: latency-svc-26ksb [1.668342637s]
May 1 16:54:26.604: INFO: Created: latency-svc-9l7kw
May 1 16:54:26.608: INFO: Got endpoints: latency-svc-9l7kw [1.675429077s]
May 1 16:54:26.640: INFO: Created: latency-svc-2mc47
May 1 16:54:26.657: INFO: Got endpoints: latency-svc-2mc47 [1.664190629s]
May 1 16:54:26.687: INFO: Created: latency-svc-lf7tt
May 1 16:54:26.759: INFO: Got endpoints: latency-svc-lf7tt [1.682217363s]
May 1 16:54:26.773: INFO: Created: latency-svc-qpk2h
May 1 16:54:26.795: INFO: Got endpoints: latency-svc-qpk2h [1.669890749s]
May 1 16:54:26.819: INFO: Created: latency-svc-x9kfp
May 1 16:54:26.831: INFO: Got endpoints: latency-svc-x9kfp [1.669789247s]
May 1 16:54:26.915: INFO: Created: latency-svc-ggjhd
May 1 16:54:26.940: INFO: Got endpoints: latency-svc-ggjhd [1.657901453s]
May 1 16:54:26.977: INFO: Created: latency-svc-m95l6
May 1 16:54:26.988: INFO: Got endpoints: latency-svc-m95l6 [1.526101656s]
May 1 16:54:27.025: INFO: Created: latency-svc-pjskd
May 1 16:54:27.055: INFO: Got endpoints: latency-svc-pjskd [1.270949774s]
May 1 16:54:27.083: INFO: Created: latency-svc-d7lrl
May 1 16:54:27.096: INFO: Got endpoints: latency-svc-d7lrl [1.261376352s]
May 1 16:54:27.119: INFO: Created: latency-svc-zqncf
May 1 16:54:27.154: INFO: Got endpoints: latency-svc-zqncf [1.239576776s]
May 1 16:54:27.174: INFO: Created: latency-svc-8m69m
May 1 16:54:27.187: INFO: Got endpoints: latency-svc-8m69m [1.217704855s]
May 1 16:54:27.228: INFO: Created: latency-svc-plh7j
May 1 16:54:27.242: INFO: Got endpoints: latency-svc-plh7j [1.089272976s]
May 1 16:54:27.294: INFO: Created: latency-svc-dsjc4
May 1 16:54:27.296: INFO: Got endpoints: latency-svc-dsjc4 [1.132449191s]
May 1 16:54:27.365: INFO: Created: latency-svc-x8k8b
May 1 16:54:27.392: INFO: Got endpoints: latency-svc-x8k8b [1.155290681s]
May 1 16:54:27.439: INFO: Created: latency-svc-x728d
May 1 16:54:27.445: INFO: Got endpoints: latency-svc-x728d [961.834024ms]
May 1 16:54:27.466: INFO: Created: latency-svc-w4v28
May 1 16:54:27.476: INFO: Got endpoints: latency-svc-w4v28 [867.962836ms]
May 1 16:54:27.497: INFO: Created: latency-svc-j2dp7
May 1 16:54:27.500: INFO: Got endpoints: latency-svc-j2dp7 [842.879773ms]
May 1 16:54:27.567: INFO: Created: latency-svc-gjpdc
May 1 16:54:27.588: INFO: Got endpoints: latency-svc-gjpdc [829.414132ms]
May 1 16:54:27.633: INFO: Created: latency-svc-qpbhr
May 1 16:54:27.639: INFO: Got endpoints: latency-svc-qpbhr [843.222382ms]
May 1 16:54:27.659: INFO: Created: latency-svc-5cvqg
May 1 16:54:27.694: INFO: Got endpoints: latency-svc-5cvqg [862.501047ms]
May 1 16:54:27.707: INFO: Created: latency-svc-j9rw8
May 1 16:54:27.730: INFO: Got endpoints: latency-svc-j9rw8 [789.265581ms]
May 1 16:54:27.757: INFO: Created: latency-svc-h84ms
May 1 16:54:27.771: INFO: Got endpoints: latency-svc-h84ms [783.667262ms]
May 1 16:54:27.873: INFO: Created: latency-svc-8knz6
May 1 16:54:27.876: INFO: Got endpoints: latency-svc-8knz6 [821.184058ms]
May 1 16:54:27.958: INFO: Created: latency-svc-97bq8
May 1 16:54:28.023: INFO: Got endpoints: latency-svc-97bq8 [926.056422ms]
May 1 16:54:28.056: INFO: Created: latency-svc-lj5dc
May 1 16:54:28.096: INFO: Got endpoints: latency-svc-lj5dc [941.910307ms]
May 1 16:54:28.203: INFO: Created: latency-svc-qqvz9
May 1 16:54:28.216: INFO: Got endpoints: latency-svc-qqvz9 [1.029418803s]
May 1 16:54:28.259: INFO: Created: latency-svc-pd5th
May 1 16:54:28.276: INFO: Got endpoints: latency-svc-pd5th [1.034567646s]
May 1 16:54:28.346: INFO: Created: latency-svc-dbl9n
May 1 16:54:28.361: INFO: Got endpoints: latency-svc-dbl9n [1.064619947s]
May 1 16:54:28.391: INFO: Created: latency-svc-qvdw9
May 1 16:54:28.415: INFO: Got endpoints: latency-svc-qvdw9 [1.02319308s]
May 1 16:54:28.494: INFO: Created: latency-svc-ww7mb
May 1 16:54:28.511: INFO: Got endpoints: latency-svc-ww7mb [1.065462525s]
May 1 16:54:28.547: INFO: Created: latency-svc-s4sdf
May 1 16:54:28.565: INFO: Got endpoints: latency-svc-s4sdf [1.089412557s]
May 1 16:54:28.633: INFO: Created: latency-svc-hkdd2
May 1 16:54:28.635: INFO: Got endpoints: latency-svc-hkdd2 [1.135268697s]
May 1 16:54:28.664: INFO: Created: latency-svc-5bdqr
May 1 16:54:28.687: INFO: Got endpoints: latency-svc-5bdqr [1.098622839s]
May 1 16:54:28.759: INFO: Created: latency-svc-hqn78
May 1 16:54:28.764: INFO: Got endpoints: latency-svc-hqn78 [1.125295753s]
May 1 16:54:28.817: INFO: Created: latency-svc-vklzg
May 1 16:54:28.830: INFO: Got endpoints: latency-svc-vklzg [1.136670002s]
May 1 16:54:28.892: INFO: Created: latency-svc-4vnw7
May 1 16:54:28.897: INFO: Got endpoints: latency-svc-4vnw7 [1.167501184s]
May 1 16:54:28.920: INFO: Created: latency-svc-9tl25
May 1 16:54:28.934: INFO: Got endpoints: latency-svc-9tl25 [1.162332753s]
May 1 16:54:28.949: INFO: Created: latency-svc-8f7d2
May 1 16:54:28.963: INFO: Got endpoints: latency-svc-8f7d2 [1.087485809s]
May 1 16:54:29.178: INFO: Created: latency-svc-h5rzn
May 1 16:54:29.185: INFO: Got endpoints: latency-svc-h5rzn [1.162364747s]
May 1 16:54:29.413: INFO: Created: latency-svc-vlgpb
May 1 16:54:29.455: INFO: Got endpoints: latency-svc-vlgpb [1.358580481s]
May 1 16:54:29.555: INFO: Created: latency-svc-2dhrv
May 1 16:54:29.563: INFO: Got endpoints: latency-svc-2dhrv [1.346647919s]
May 1 16:54:29.587: INFO: Created: latency-svc-92lwm
May 1 16:54:29.599: INFO: Got endpoints: latency-svc-92lwm [1.322888775s]
May 1 16:54:29.627: INFO: Created: latency-svc-gxffz
May 1 16:54:29.643: INFO: Got endpoints: latency-svc-gxffz [1.282101255s]
May 1 16:54:29.772: INFO: Created: latency-svc-jl7hd
May 1 16:54:29.779: INFO: Got endpoints: latency-svc-jl7hd [1.364566145s]
May 1 16:54:29.861: INFO: Created: latency-svc-n4s8s
May 1 16:54:30.035: INFO: Got endpoints: latency-svc-n4s8s [1.523813296s]
May 1 16:54:30.287: INFO: Created: latency-svc-9k5kf
May 1 16:54:30.314: INFO: Got endpoints: latency-svc-9k5kf [1.748463443s]
May 1 16:54:30.515: INFO: Created: latency-svc-lxx76
May 1 16:54:30.699: INFO: Got endpoints: latency-svc-lxx76 [2.063836563s]
May 1 16:54:30.711: INFO: Created: latency-svc-vqmgh
May 1 16:54:31.040: INFO: Got endpoints: latency-svc-vqmgh [2.352730308s]
May 1 16:54:31.142: INFO: Created: latency-svc-6lff2
May 1 16:54:31.147: INFO: Got endpoints: latency-svc-6lff2 [2.383276427s]
May 1 16:54:31.189: INFO: Created: latency-svc-gpf52
May 1 16:54:31.219: INFO: Got endpoints: latency-svc-gpf52 [2.388885639s]
May 1 16:54:31.298: INFO: Created: latency-svc-g8w6v
May 1 16:54:31.325: INFO: Got endpoints: latency-svc-g8w6v [2.427589602s]
May 1 16:54:31.326: INFO: Created: latency-svc-cztzr
May 1 16:54:31.350: INFO: Got endpoints: latency-svc-cztzr [2.416444924s]
May 1 16:54:31.375: INFO: Created: latency-svc-27cjg
May 1 16:54:31.388: INFO: Got endpoints: latency-svc-27cjg [2.424477319s]
May 1 16:54:31.437: INFO: Created: latency-svc-lt26t
May 1 16:54:31.442: INFO: Got endpoints: latency-svc-lt26t [2.256869555s]
May 1 16:54:31.477: INFO: Created: latency-svc-6zj26
May 1 16:54:31.490: INFO: Got endpoints: latency-svc-6zj26 [2.035331999s]
May 1 16:54:31.532: INFO: Created: latency-svc-2bhqm
May 1 16:54:31.844: INFO: Got endpoints: latency-svc-2bhqm [2.281708034s]
May 1 16:54:31.847: INFO: Created: latency-svc-sp9zv
May 1 16:54:31.874: INFO: Got endpoints: latency-svc-sp9zv [2.275238078s]
May 1 16:54:31.903: INFO: Created: latency-svc-rb2tf
May 1 16:54:31.917: INFO: Got endpoints: latency-svc-rb2tf [2.27421253s]
May 1 16:54:31.938: INFO: Created: latency-svc-xz4fn
May 1 16:54:32.004: INFO: Got endpoints: latency-svc-xz4fn [2.224699814s]
May 1 16:54:32.007: INFO: Created: latency-svc-vtthj
May 1 16:54:32.031: INFO: Got endpoints: latency-svc-vtthj [1.996244734s]
May 1 16:54:32.079: INFO: Created: latency-svc-ngtt7
May 1 16:54:32.148: INFO: Got endpoints: latency-svc-ngtt7 [1.834343411s]
May 1 16:54:32.159: INFO: Created: latency-svc-52rcd
May 1 16:54:32.175: INFO: Got endpoints: latency-svc-52rcd [1.476144471s]
May 1 16:54:32.204: INFO: Created: latency-svc-cbc5r
May 1 16:54:32.218: INFO: Got endpoints: latency-svc-cbc5r [1.17784065s]
May 1 16:54:32.240: INFO: Created: latency-svc-x4pv2
May 1 16:54:32.298: INFO: Got endpoints: latency-svc-x4pv2 [1.150520962s]
May 1 16:54:32.302: INFO: Created: latency-svc-n9gkb
May 1 16:54:32.308: INFO: Got endpoints: latency-svc-n9gkb [1.088278349s]
May 1 16:54:32.333: INFO: Created: latency-svc-vbprh
May 1 16:54:32.350: INFO: Got endpoints: latency-svc-vbprh [1.025388971s]
May 1 16:54:32.370: INFO: Created: latency-svc-hqrrt
May 1 16:54:32.387: INFO: Got endpoints: latency-svc-hqrrt [1.036281587s]
May 1 16:54:32.448: INFO: Created: latency-svc-zhgsm
May 1 16:54:32.453: INFO: Got endpoints: latency-svc-zhgsm [1.065230068s]
May 1 16:54:32.499: INFO: Created: latency-svc-x4wlj
May 1 16:54:32.508: INFO: Got endpoints: latency-svc-x4wlj [1.065511826s]
May 1 16:54:32.547: INFO: Created: latency-svc-7292t
May 1 16:54:32.591: INFO: Got endpoints: latency-svc-7292t [1.101126929s]
May 1 16:54:32.593: INFO: Created: latency-svc-klfx5
May 1 16:54:32.609: INFO: Got endpoints: latency-svc-klfx5 [764.661917ms]
May 1 16:54:32.633: INFO: Created: latency-svc-52m8z
May 1 16:54:32.652: INFO: Got endpoints: latency-svc-52m8z [777.491571ms]
May 1 16:54:32.673: INFO: Created: latency-svc-kvxmw
May 1 16:54:32.688: INFO: Got endpoints: latency-svc-kvxmw [771.02517ms]
May 1 16:54:32.761: INFO: Created: latency-svc-cthj5
May 1 16:54:32.773: INFO: Got endpoints: latency-svc-cthj5 [768.249797ms]
May 1 16:54:32.813: INFO: Created: latency-svc-rqr28
May 1 16:54:32.827: INFO: Got endpoints: latency-svc-rqr28 [795.785433ms]
May 1 16:54:32.849: INFO: Created: latency-svc-jt8g5
May 1 16:54:32.879: INFO: Got endpoints: latency-svc-jt8g5 [730.11536ms]
May 1 16:54:32.905: INFO: Created: latency-svc-6hzb7
May 1 16:54:32.912: INFO: Got endpoints: latency-svc-6hzb7 [736.404287ms]
May 1 16:54:33.184: INFO: Created: latency-svc-98njz
May 1 16:54:33.316: INFO: Got endpoints: latency-svc-98njz [1.098374833s]
May 1 16:54:33.329: INFO: Created: latency-svc-f29ls
May 1 16:54:33.376: INFO: Got endpoints: latency-svc-f29ls [1.077557078s]
May 1 16:54:33.454: INFO: Created: latency-svc-wjdn6
May 1 16:54:33.457: INFO: Got endpoints: latency-svc-wjdn6 [1.14951734s]
May 1 16:54:33.489: INFO: Created: latency-svc-4jr74
May 1 16:54:33.506: INFO: Got endpoints: latency-svc-4jr74 [1.15525698s]
May 1 16:54:33.543: INFO: Created: latency-svc-nl68r
May 1 16:54:33.603: INFO: Got endpoints: latency-svc-nl68r [1.21662207s]
May 1 16:54:33.605: INFO: Created: latency-svc-jpm24
May 1 16:54:33.635: INFO: Got endpoints: latency-svc-jpm24 [1.181760143s]
May 1 16:54:33.682: INFO: Created: latency-svc-mw6kv
May 1 16:54:33.699: INFO: Got endpoints: latency-svc-mw6kv [1.191083738s]
May 1 16:54:33.748: INFO: Created: latency-svc-qbd25
May 1 16:54:33.753: INFO: Got endpoints: latency-svc-qbd25 [1.16163682s]
May 1 16:54:33.772: INFO: Created: latency-svc-dqv76
May 1 16:54:33.802: INFO: Got endpoints: latency-svc-dqv76 [1.192240865s]
May 1 16:54:33.827: INFO: Created: latency-svc-w4rbn
May 1 16:54:33.843: INFO: Got endpoints: latency-svc-w4rbn [1.191394393s]
May 1 16:54:33.879: INFO: Created: latency-svc-m87rc
May 1 16:54:33.885: INFO: Got endpoints: latency-svc-m87rc [1.196965507s]
May 1 16:54:33.928: INFO: Created: latency-svc-fftw5
May 1 16:54:33.958: INFO: Got endpoints: latency-svc-fftw5 [1.185380035s]
May 1 16:54:34.011: INFO: Created: latency-svc-m67jt
May 1 16:54:34.013: INFO: Got endpoints: latency-svc-m67jt [1.186258477s]
May 1 16:54:34.037: INFO: Created: latency-svc-p2zwx
May 1 16:54:34.054: INFO: Got endpoints: latency-svc-p2zwx [1.175757764s]
May 1 16:54:34.073: INFO: Created: latency-svc-4vdgk
May 1 16:54:34.148: INFO: Got endpoints: latency-svc-4vdgk [1.235935147s]
May 1 16:54:34.173: INFO: Created: latency-svc-tr8j7
May 1 16:54:34.193: INFO: Got endpoints: latency-svc-tr8j7 [876.738236ms]
May 1 16:54:34.217: INFO: Created: latency-svc-hgqqs
May 1 16:54:34.236: INFO: Got endpoints: latency-svc-hgqqs [860.027458ms]
May 1 16:54:34.299: INFO: Created: latency-svc-pjzsd
May 1 16:54:34.302: INFO: Got endpoints: latency-svc-pjzsd [844.4233ms]
May 1 16:54:34.383: INFO: Created: latency-svc-626s8
May 1 16:54:34.442: INFO: Got endpoints: latency-svc-626s8 [935.802362ms]
May 1 16:54:34.529: INFO: Created: latency-svc-ltk64
May 1 16:54:34.609: INFO: Got endpoints: latency-svc-ltk64 [1.00597425s]
May 1 16:54:34.641: INFO: Created: latency-svc-9l2p4
May 1 16:54:34.668: INFO: Got endpoints: latency-svc-9l2p4 [1.032488924s]
May 1 16:54:34.709: INFO: Created: latency-svc-gt7hs
May 1 16:54:34.767: INFO: Got endpoints: latency-svc-gt7hs [1.068180043s]
May 1 16:54:34.768: INFO: Created: latency-svc-xmww2
May 1 16:54:34.815: INFO: Got endpoints: latency-svc-xmww2 [1.062122379s]
May 1 16:54:34.909: INFO: Created: latency-svc-xbcpl
May 1 16:54:34.914: INFO: Got endpoints: latency-svc-xbcpl [1.112055166s]
May 1 16:54:34.936: INFO: Created: latency-svc-kwf96
May 1 16:54:34.950: INFO: Got endpoints: latency-svc-kwf96 [1.106405675s]
May 1 16:54:34.997: INFO: Created: latency-svc-vb9rq
May 1 16:54:35.058: INFO: Got endpoints: latency-svc-vb9rq [1.172853246s]
May 1 16:54:35.062: INFO: Created: latency-svc-x5rtb
May 1 16:54:35.065: INFO: Got endpoints: latency-svc-x5rtb [1.106543114s]
May 1 16:54:35.085: INFO: Created: latency-svc-v7mzt
May 1 16:54:35.101: INFO: Got endpoints: latency-svc-v7mzt [1.088056733s]
May 1 16:54:35.134: INFO: Created: latency-svc-6lzkj
May 1 16:54:35.149: INFO: Got endpoints: latency-svc-6lzkj [1.095037491s]
May 1 16:54:35.184: INFO: Created: latency-svc-85skk
May 1 16:54:35.198: INFO: Got endpoints: latency-svc-85skk [1.050041166s]
May 1 16:54:35.241: INFO: Created: latency-svc-9wbfn
May 1 16:54:35.277: INFO: Got endpoints: latency-svc-9wbfn [1.084568954s]
May 1 16:54:35.334: INFO: Created: latency-svc-jkt9k
May 1 16:54:35.360: INFO: Got endpoints: latency-svc-jkt9k [1.124627623s]
May 1 16:54:35.404: INFO: Created: latency-svc-jqt4m
May 1 16:54:35.466: INFO: Got endpoints: latency-svc-jqt4m [1.164458599s]
May 1 16:54:35.504: INFO: Created: latency-svc-qggx2
May 1 16:54:35.517: INFO: Got endpoints: latency-svc-qggx2 [1.075141759s]
May 1 16:54:35.538: INFO: Created: latency-svc-c9zwv
May 1 16:54:35.547: INFO: Got endpoints: latency-svc-c9zwv [937.188473ms]
May 1 16:54:35.652: INFO: Created: latency-svc-978z6
May 1 16:54:35.665: INFO: Got endpoints: latency-svc-978z6 [996.970232ms]
May 1 16:54:35.812: INFO: Created: latency-svc-cvg6l
May 1 16:54:35.829: INFO: Got endpoints: latency-svc-cvg6l [1.062063057s]
May 1 16:54:35.850: INFO: Created: latency-svc-cxbjm
May 1 16:54:35.865: INFO: Got endpoints: latency-svc-cxbjm [1.049952847s]
May 1 16:54:35.884: INFO: Created: latency-svc-mhhck
May 1 16:54:35.933: INFO: Got endpoints: latency-svc-mhhck [1.018885445s]
May 1 16:54:35.946: INFO: Created: latency-svc-znvrj
May 1 16:54:35.962: INFO: Got endpoints: latency-svc-znvrj [1.011708502s]
May 1 16:54:35.982: INFO: Created: latency-svc-lqq5c
May 1 16:54:36.010: INFO: Got endpoints: latency-svc-lqq5c [951.79298ms]
May 1 16:54:36.071: INFO: Created: latency-svc-xc8tf
May 1 16:54:36.076: INFO: Got endpoints: latency-svc-xc8tf [1.011186889s]
May 1 16:54:36.100: INFO: Created: latency-svc-9dd48
May 1 16:54:36.119: INFO: Got endpoints: latency-svc-9dd48 [1.017195409s]
May 1 16:54:36.288: INFO: Created: latency-svc-xkksw
May 1 16:54:36.322: INFO: Got endpoints: latency-svc-xkksw [1.172246948s]
May 1 16:54:36.503: INFO: Created: latency-svc-hddj7
May 1 16:54:36.509: INFO: Got endpoints: latency-svc-hddj7 [1.310953086s]
May 1 16:54:36.544: INFO: Created: latency-svc-xnn9l
May 1 16:54:36.593: INFO: Got endpoints: latency-svc-xnn9l [1.315869494s]
May 1 16:54:36.711: INFO: Created: latency-svc-6j88z
May 1 16:54:36.731: INFO: Got endpoints: latency-svc-6j88z [1.370647954s]
May 1 16:54:36.790: INFO: Created: latency-svc-2m5x6
May 1 16:54:36.825: INFO: Got endpoints: latency-svc-2m5x6 [1.358360857s]
May 1 16:54:36.911: INFO: Created: latency-svc-v98lp
May 1 16:54:36.923: INFO: Got endpoints: latency-svc-v98lp [1.406320816s]
May 1 16:54:36.989: INFO: Created: latency-svc-4cfrv
May 1 16:54:37.038: INFO: Got endpoints: latency-svc-4cfrv [1.491535359s]
May 1 16:54:37.132: INFO: Created: latency-svc-cwk74
May 1 16:54:37.135: INFO: Got endpoints: latency-svc-cwk74 [1.4701189s]
May 1 16:54:37.800: INFO: Created: latency-svc-kppph
May 1 16:54:37.979: INFO: Got endpoints: latency-svc-kppph [2.150114634s]
May 1 16:54:38.126: INFO: Created: latency-svc-h26vj
May 1 16:54:38.141: INFO: Got endpoints: latency-svc-h26vj [2.275829261s]
May 1 16:54:38.214: INFO: Created: latency-svc-tq7pk
May 1 16:54:38.274: INFO: Got endpoints: latency-svc-tq7pk [2.341155286s]
May 1 16:54:38.321: INFO: Created: latency-svc-l2gzt
May 1 16:54:38.326: INFO: Got endpoints: latency-svc-l2gzt [2.364383677s]
May 1 16:54:38.460: INFO: Created: latency-svc-dq9sz
May 1 16:54:38.463: INFO: Got endpoints: latency-svc-dq9sz [2.452833306s]
May 1 16:54:38.542: INFO: Created: latency-svc-8878c
May 1 16:54:38.549: INFO: Got endpoints: latency-svc-8878c [222.789683ms]
May 1 16:54:38.628: INFO: Created: latency-svc-kjrr7
May 1 16:54:38.645: INFO: Got endpoints: latency-svc-kjrr7 [2.569218261s]
May 1 16:54:38.683: INFO: Created: latency-svc-sx98m
May 1 16:54:38.706: INFO: Got endpoints: latency-svc-sx98m [2.587208256s]
May 1 16:54:38.783: INFO: Created: latency-svc-kvnwp
May 1 16:54:38.851: INFO: Got endpoints: latency-svc-kvnwp [2.529143554s]
May 1 16:54:38.852: INFO: Created: latency-svc-z9hft
May 1 16:54:38.874: INFO: Got endpoints: latency-svc-z9hft [2.365141512s]
May 1 16:54:39.005: INFO: Created: latency-svc-lcqmd
May 1 16:54:39.009: INFO: Got endpoints: latency-svc-lcqmd [2.415838019s]
May 1 16:54:39.196: INFO: Created: latency-svc-fhd5p
May 1 16:54:39.264: INFO: Got endpoints: latency-svc-fhd5p [2.533199516s]
May 1 16:54:39.406: INFO: Created: latency-svc-dlbjw
May 1 16:54:39.414: INFO: Got endpoints: latency-svc-dlbjw [2.589097872s]
May 1 16:54:39.712: INFO: Created: latency-svc-5dszw
May 1 16:54:39.809: INFO: Got endpoints: latency-svc-5dszw [2.886304345s]
May 1 16:54:40.278: INFO: Created: latency-svc-494dp
May 1 16:54:40.283: INFO: Got endpoints: latency-svc-494dp [3.244437347s]
May 1 16:54:40.564: INFO: Created: latency-svc-zq2hq
May 1 16:54:40.687: INFO: Got endpoints: latency-svc-zq2hq [3.552271839s]
May 1 16:54:40.751: INFO: Created: latency-svc-x8grt
May 1 16:54:40.897: INFO: Got endpoints: latency-svc-x8grt [2.917393502s]
May 1 16:54:40.984: INFO: Created: latency-svc-zj889
May 1 16:54:41.052: INFO: Got endpoints: latency-svc-zj889 [2.910927742s]
May 1 16:54:41.148: INFO: Created: latency-svc-q7mf7
May 1 16:54:41.298: INFO: Got endpoints: latency-svc-q7mf7 [3.024137112s]
May 1 16:54:41.301: INFO: Created: latency-svc-d2pgw
May 1 16:54:41.308: INFO: Got endpoints: latency-svc-d2pgw [2.844809879s]
May 1 16:54:41.466: INFO: Created: latency-svc-wl6qs
May 1 16:54:41.472: INFO: Got endpoints: latency-svc-wl6qs [2.923134282s]
May 1 16:54:41.525: INFO: Created: latency-svc-dwlzt
May 1 16:54:41.663: INFO: Got endpoints: latency-svc-dwlzt [3.018007119s]
May 1 16:54:41.712: INFO: Created: latency-svc-94fjk
May 1 16:54:41.741: INFO: Got endpoints: latency-svc-94fjk [3.034990671s]
May 1 16:54:41.819: INFO: Created: latency-svc-6cqkh
May 1 16:54:41.862: INFO: Got endpoints: latency-svc-6cqkh [3.011426375s]
May 1 16:54:41.993: INFO: Created: latency-svc-7tcxw
May 1 16:54:41.995: INFO: Got endpoints: latency-svc-7tcxw [3.121196157s]
May 1 16:54:42.172: INFO: Created: latency-svc-wddbm
May 1 16:54:42.176: INFO: Got endpoints: latency-svc-wddbm [3.166643534s]
May 1 16:54:42.270: INFO: Created: latency-svc-ntrvd
May 1 16:54:42.376: INFO: Got endpoints: latency-svc-ntrvd [3.111481712s]
May 1 16:54:42.449: INFO: Created: latency-svc-2frgj
May 1 16:54:42.561: INFO: Got endpoints: latency-svc-2frgj [3.147548767s]
May 1 16:54:42.762: INFO: Created: latency-svc-tdmxr
May 1 16:54:42.773: INFO: Got endpoints: latency-svc-tdmxr [2.963897595s]
May 1 16:54:42.830: INFO: Created: latency-svc-lq9dh May 1 16:54:42.851: INFO: Got endpoints: latency-svc-lq9dh [2.568631831s] May 1 16:54:42.919: INFO: Created: latency-svc-ccdxm May 1 16:54:43.113: INFO: Got endpoints: latency-svc-ccdxm [2.425280239s] May 1 16:54:43.164: INFO: Created: latency-svc-cl6b4 May 1 16:54:43.298: INFO: Got endpoints: latency-svc-cl6b4 [2.401378513s] May 1 16:54:43.339: INFO: Created: latency-svc-g9629 May 1 16:54:43.343: INFO: Got endpoints: latency-svc-g9629 [2.290458856s] May 1 16:54:43.374: INFO: Created: latency-svc-f9tvc May 1 16:54:43.398: INFO: Got endpoints: latency-svc-f9tvc [2.099354835s] May 1 16:54:43.460: INFO: Created: latency-svc-g9qh6 May 1 16:54:43.488: INFO: Got endpoints: latency-svc-g9qh6 [2.179917014s] May 1 16:54:43.520: INFO: Created: latency-svc-r7wp6 May 1 16:54:43.634: INFO: Got endpoints: latency-svc-r7wp6 [2.161468893s] May 1 16:54:43.638: INFO: Created: latency-svc-qcgrk May 1 16:54:43.656: INFO: Got endpoints: latency-svc-qcgrk [1.992474643s] May 1 16:54:43.682: INFO: Created: latency-svc-sb7vg May 1 16:54:43.699: INFO: Got endpoints: latency-svc-sb7vg [1.957949091s] May 1 16:54:43.783: INFO: Created: latency-svc-r42p2 May 1 16:54:43.789: INFO: Got endpoints: latency-svc-r42p2 [1.926693067s] May 1 16:54:44.234: INFO: Created: latency-svc-sfflr May 1 16:54:44.274: INFO: Got endpoints: latency-svc-sfflr [2.27871079s] May 1 16:54:44.317: INFO: Created: latency-svc-4rg59 May 1 16:54:44.322: INFO: Got endpoints: latency-svc-4rg59 [2.145742159s] May 1 16:54:44.400: INFO: Created: latency-svc-2xt97 May 1 16:54:44.431: INFO: Got endpoints: latency-svc-2xt97 [2.054712391s] May 1 16:54:44.527: INFO: Created: latency-svc-tmdlx May 1 16:54:44.766: INFO: Got endpoints: latency-svc-tmdlx [2.204659731s] May 1 16:54:44.766: INFO: Latencies: [84.580693ms 134.320423ms 213.408602ms 222.789683ms 314.845021ms 452.923703ms 600.651928ms 730.11536ms 736.404287ms 764.661917ms 768.249797ms 771.02517ms 777.491571ms 783.667262ms 
789.265581ms 795.785433ms 821.184058ms 829.414132ms 842.879773ms 843.222382ms 844.4233ms 860.027458ms 862.501047ms 867.962836ms 876.738236ms 926.056422ms 935.802362ms 937.188473ms 941.910307ms 951.79298ms 961.834024ms 996.970232ms 1.00597425s 1.011186889s 1.011708502s 1.017195409s 1.018885445s 1.02319308s 1.025388971s 1.029418803s 1.032488924s 1.034567646s 1.036281587s 1.049952847s 1.050041166s 1.062063057s 1.062122379s 1.064619947s 1.065230068s 1.065462525s 1.065511826s 1.068180043s 1.075141759s 1.077557078s 1.084568954s 1.087485809s 1.088056733s 1.088278349s 1.089272976s 1.089412557s 1.095037491s 1.098374833s 1.098622839s 1.101126929s 1.106405675s 1.106543114s 1.112055166s 1.124627623s 1.125295753s 1.132449191s 1.135268697s 1.136670002s 1.14951734s 1.150520962s 1.15525698s 1.155290681s 1.16163682s 1.162332753s 1.162364747s 1.164458599s 1.167501184s 1.172246948s 1.172853246s 1.175757764s 1.17784065s 1.181760143s 1.185380035s 1.186258477s 1.191083738s 1.191394393s 1.192240865s 1.196965507s 1.203113031s 1.21662207s 1.217704855s 1.235935147s 1.239576776s 1.261376352s 1.265689112s 1.270949774s 1.282101255s 1.310953086s 1.315869494s 1.322888775s 1.346647919s 1.358360857s 1.358580481s 1.364566145s 1.370647954s 1.399679735s 1.406320816s 1.429235434s 1.4701189s 1.476144471s 1.491535359s 1.523813296s 1.526101656s 1.657901453s 1.664190629s 1.668342637s 1.669789247s 1.669890749s 1.675429077s 1.682217363s 1.714463407s 1.748463443s 1.751744777s 1.832182286s 1.834343411s 1.919642433s 1.926693067s 1.957949091s 1.985353039s 1.992474643s 1.996244734s 2.035331999s 2.054712391s 2.063836563s 2.077195632s 2.099354835s 2.145742159s 2.150114634s 2.161468893s 2.179917014s 2.180102608s 2.204659731s 2.224699814s 2.234489517s 2.245761731s 2.256869555s 2.27421253s 2.275238078s 2.275829261s 2.27871079s 2.281708034s 2.290458856s 2.341155286s 2.352730308s 2.364383677s 2.365141512s 2.383276427s 2.388885639s 2.401378513s 2.415838019s 2.416444924s 2.424477319s 2.425280239s 2.427589602s 
2.452833306s 2.472128869s 2.480282176s 2.495014896s 2.511563102s 2.526414172s 2.529143554s 2.533199516s 2.568631831s 2.569218261s 2.573681106s 2.577639785s 2.587208256s 2.589097872s 2.597675534s 2.607529412s 2.844809879s 2.886304345s 2.910927742s 2.917393502s 2.923134282s 2.963897595s 3.011426375s 3.018007119s 3.024137112s 3.034990671s 3.111481712s 3.121196157s 3.147548767s 3.166643534s 3.244437347s 3.552271839s] May 1 16:54:44.766: INFO: 50 %ile: 1.282101255s May 1 16:54:44.766: INFO: 90 %ile: 2.587208256s May 1 16:54:44.766: INFO: 99 %ile: 3.244437347s May 1 16:54:44.766: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:54:44.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9098" for this suite. May 1 16:55:16.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:55:17.067: INFO: namespace svc-latency-9098 deletion completed in 32.263721702s • [SLOW TEST:59.014 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:55:17.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 1 16:55:17.178: INFO: namespace kubectl-667 May 1 16:55:17.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-667' May 1 16:55:20.548: INFO: stderr: "" May 1 16:55:20.548: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 1 16:55:21.552: INFO: Selector matched 1 pods for map[app:redis] May 1 16:55:21.552: INFO: Found 0 / 1 May 1 16:55:22.753: INFO: Selector matched 1 pods for map[app:redis] May 1 16:55:22.753: INFO: Found 0 / 1 May 1 16:55:23.712: INFO: Selector matched 1 pods for map[app:redis] May 1 16:55:23.713: INFO: Found 0 / 1 May 1 16:55:24.568: INFO: Selector matched 1 pods for map[app:redis] May 1 16:55:24.568: INFO: Found 1 / 1 May 1 16:55:24.568: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 16:55:24.604: INFO: Selector matched 1 pods for map[app:redis] May 1 16:55:24.604: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 16:55:24.604: INFO: wait on redis-master startup in kubectl-667 May 1 16:55:24.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9z5pb redis-master --namespace=kubectl-667' May 1 16:55:24.759: INFO: stderr: "" May 1 16:55:24.760: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 16:55:24.176 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 16:55:24.176 # Server started, Redis version 3.2.12\n1:M 01 May 16:55:24.176 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 16:55:24.176 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 1 16:55:24.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-667' May 1 16:55:24.924: INFO: stderr: "" May 1 16:55:24.924: INFO: stdout: "service/rm2 exposed\n" May 1 16:55:24.956: INFO: Service rm2 in namespace kubectl-667 found. STEP: exposing service May 1 16:55:26.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-667' May 1 16:55:27.124: INFO: stderr: "" May 1 16:55:27.124: INFO: stdout: "service/rm3 exposed\n" May 1 16:55:27.191: INFO: Service rm3 in namespace kubectl-667 found. 
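The It-block above drives `kubectl expose` twice: once against a replication controller and once against the service that the first expose produced. Outside the e2e harness, the same flow can be sketched roughly as below; the namespace, names, and ports are the ones this run used, the kubeconfig path and the pre-existing `redis-master` RC are environment assumptions, and this is an illustrative reconstruction rather than the harness's own code.

```shell
# Sketch of the expose flow exercised by the test above.
# Assumes: a reachable cluster, and an RC named redis-master already
# running in namespace kubectl-667 (as the log shows was created earlier).

NS=kubectl-667

# Expose the replication controller as a new service "rm2",
# mapping service port 1234 to the pods' Redis port 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 \
    --target-port=6379 --namespace="$NS"

# A service can itself be exposed again: "rm3" fronts the rm2 selector
# on port 2345, still targeting container port 6379.
kubectl expose service rm2 --name=rm3 --port=2345 \
    --target-port=6379 --namespace="$NS"

# Verify both services were created, as the test's polling does.
kubectl get services rm2 rm3 --namespace="$NS"
```

Note that `--port` is the service's own port while `--target-port` is the port on the backing pods, which is why both rm2 (1234) and rm3 (2345) can front the same Redis container port 6379.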
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 16:55:29.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-667" for this suite. May 1 16:56:08.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 16:56:08.318: INFO: namespace kubectl-667 deletion completed in 39.115251835s • [SLOW TEST:51.251 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 16:56:08.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-dgfwl in namespace proxy-4064 I0501 16:56:10.115812 6 runners.go:180] Created replication controller with name: proxy-service-dgfwl, namespace: proxy-4064, replica count: 1 I0501 16:56:11.166242 6 runners.go:180] 
proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:56:12.166453 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:56:13.166688 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:56:14.166896 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:56:15.167130 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:16.167375 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:17.167614 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:18.167798 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:19.168007 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:20.168253 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:21.168445 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 16:56:22.168698 6 runners.go:180] proxy-service-dgfwl Pods: 1 out of 1 
created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 16:56:22.172: INFO: setup took 13.179682059s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 1 16:56:22.178: INFO: (0) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 6.043902ms) May 1 16:56:22.179: INFO: (0) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 7.64144ms) May 1 16:56:22.180: INFO: (0) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 7.909357ms) May 1 16:56:22.180: INFO: (0) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 8.051835ms) May 1 16:56:22.180: INFO: (0) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 8.278415ms) May 1 16:56:22.180: INFO: (0) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 8.624597ms) May 1 16:56:22.180: INFO: (0) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 8.679838ms) May 1 16:56:22.181: INFO: (0) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 9.65729ms) May 1 16:56:22.182: INFO: (0) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 9.977037ms) May 1 16:56:22.182: INFO: (0) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 10.158454ms) May 1 16:56:22.182: INFO: (0) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 10.334158ms) May 1 16:56:22.206: INFO: (0) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 34.601401ms) May 1 16:56:22.206: INFO: (0) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 34.76799ms) May 1 16:56:22.206: INFO: (0) 
/api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 34.793595ms) May 1 16:56:22.206: INFO: (0) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 4.941505ms) May 1 16:56:22.212: INFO: (1) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 5.157868ms) May 1 16:56:22.215: INFO: (1) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 8.769015ms) May 1 16:56:22.216: INFO: (1) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 8.813968ms) May 1 16:56:22.216: INFO: (1) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 9.745166ms) May 1 16:56:22.216: INFO: (1) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 9.739998ms) May 1 16:56:22.216: INFO: (1) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: ... (200; 9.740254ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 9.787874ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 9.82011ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... 
(200; 9.786701ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 9.914604ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 9.942279ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 10.389652ms) May 1 16:56:22.217: INFO: (1) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 10.39475ms) May 1 16:56:22.222: INFO: (2) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 6.194235ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 6.196687ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 6.427565ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 6.905451ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 6.692474ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.924539ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 6.915487ms) May 1 16:56:22.224: INFO: (2) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 6.916819ms) May 1 16:56:22.227: INFO: (3) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 2.864245ms) May 1 16:56:22.227: INFO: (3) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 2.955457ms) May 1 16:56:22.227: INFO: (3) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... 
(200; 2.866454ms) May 1 16:56:22.229: INFO: (3) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.920328ms) May 1 16:56:22.229: INFO: (3) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.066854ms) May 1 16:56:22.230: INFO: (3) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.861021ms) May 1 16:56:22.231: INFO: (3) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.412698ms) May 1 16:56:22.231: INFO: (3) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 6.503525ms) May 1 16:56:22.231: INFO: (3) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 6.700443ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 7.277793ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 7.703773ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 7.977397ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 7.994499ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 7.918643ms) May 1 16:56:22.232: INFO: (3) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 8.029004ms) May 1 16:56:22.238: INFO: (4) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 5.690136ms) May 1 16:56:22.238: INFO: (4) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.664825ms) May 1 16:56:22.238: INFO: (4) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... 
(200; 5.555406ms) May 1 16:56:22.238: INFO: (4) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 5.685004ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 7.084291ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.974447ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 7.128172ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 7.435138ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 7.395625ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 7.43473ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 7.401872ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 7.548964ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 7.66888ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 7.680933ms) May 1 16:56:22.240: INFO: (4) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 5.969689ms) May 1 16:56:22.247: INFO: (5) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... 
(200; 5.994981ms) May 1 16:56:22.247: INFO: (5) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.993729ms) May 1 16:56:22.247: INFO: (5) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 6.041904ms) May 1 16:56:22.247: INFO: (5) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 6.071337ms) May 1 16:56:22.247: INFO: (5) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 6.151281ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 7.342003ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 7.543954ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 7.629027ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 7.67706ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 7.691532ms) May 1 16:56:22.248: INFO: (5) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 7.655836ms) May 1 16:56:22.251: INFO: (6) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... 
(200; 2.518618ms) May 1 16:56:22.252: INFO: (6) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.762158ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 6.591929ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 6.635511ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.98244ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 7.054897ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 6.9957ms) May 1 16:56:22.255: INFO: (6) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test<... (200; 7.138383ms) May 1 16:56:22.256: INFO: (6) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 7.319348ms) May 1 16:56:22.256: INFO: (6) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 7.981087ms) May 1 16:56:22.256: INFO: (6) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 8.119159ms) May 1 16:56:22.259: INFO: (7) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 2.771814ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 3.038065ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.607336ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... 
(200; 3.62156ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.672866ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 3.744751ms) May 1 16:56:22.260: INFO: (7) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 3.780958ms) May 1 16:56:22.261: INFO: (7) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 4.510351ms) May 1 16:56:22.261: INFO: (7) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 4.472901ms) May 1 16:56:22.261: INFO: (7) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 4.590642ms) May 1 16:56:22.261: INFO: (7) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 4.626878ms) May 1 16:56:22.261: INFO: (7) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 4.69104ms) May 1 16:56:22.262: INFO: (7) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 5.027912ms) May 1 16:56:22.265: INFO: (8) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.524833ms) May 1 16:56:22.266: INFO: (8) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 4.51028ms) May 1 16:56:22.266: INFO: (8) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 4.608963ms) May 1 16:56:22.266: INFO: (8) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 4.642188ms) May 1 16:56:22.267: INFO: (8) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... 
(200; 4.865428ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 5.893861ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 5.970187ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 6.029317ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 6.117987ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 6.031864ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 6.076041ms) May 1 16:56:22.268: INFO: (8) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 6.157488ms) May 1 16:56:22.272: INFO: (9) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 4.082816ms) May 1 16:56:22.272: INFO: (9) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 4.038596ms) May 1 16:56:22.272: INFO: (9) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.038978ms) May 1 16:56:22.272: INFO: (9) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 4.512173ms) May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... 
(200; 5.164535ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 5.187534ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 5.239952ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: ... (200; 5.247499ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.273739ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.24593ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.219963ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 5.244928ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.2999ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 5.282903ms)
May 1 16:56:22.273: INFO: (9) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 5.357934ms)
May 1 16:56:22.278: INFO: (10) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 4.283867ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.299771ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 5.453877ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.426005ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.420782ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.463384ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 5.50975ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 5.638312ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 5.735977ms)
May 1 16:56:22.279: INFO: (10) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 3.817482ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 4.667186ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 4.697864ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.657756ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.775902ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 4.732252ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test<... (200; 4.912541ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.878742ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 5.104561ms)
May 1 16:56:22.285: INFO: (11) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 5.171417ms)
May 1 16:56:22.286: INFO: (11) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 5.307874ms)
May 1 16:56:22.286: INFO: (11) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 5.353322ms)
May 1 16:56:22.286: INFO: (11) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 5.487894ms)
May 1 16:56:22.286: INFO: (11) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 5.501197ms)
May 1 16:56:22.289: INFO: (12) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 3.337044ms)
May 1 16:56:22.289: INFO: (12) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 3.355831ms)
May 1 16:56:22.289: INFO: (12) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 3.395889ms)
May 1 16:56:22.289: INFO: (12) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.484381ms)
May 1 16:56:22.290: INFO: (12) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 3.643144ms)
May 1 16:56:22.290: INFO: (12) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.733934ms)
May 1 16:56:22.290: INFO: (12) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.827473ms)
May 1 16:56:22.290: INFO: (12) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: ... (200; 2.491911ms)
May 1 16:56:22.295: INFO: (13) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.291896ms)
May 1 16:56:22.295: INFO: (13) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.205968ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 4.080101ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.043157ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 4.304253ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 4.45512ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 4.367068ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 4.698244ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 4.683892ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 4.717038ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.832897ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 4.800591ms)
May 1 16:56:22.296: INFO: (13) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 4.85189ms)
May 1 16:56:22.300: INFO: (14) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.307631ms)
May 1 16:56:22.300: INFO: (14) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.565919ms)
May 1 16:56:22.300: INFO: (14) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 3.756951ms)
May 1 16:56:22.300: INFO: (14) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 3.783557ms)
May 1 16:56:22.301: INFO: (14) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 3.757199ms)
May 1 16:56:22.301: INFO: (14) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.050795ms)
May 1 16:56:22.301: INFO: (14) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 3.9775ms)
May 1 16:56:22.301: INFO: (14) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 4.003241ms)
May 1 16:56:22.301: INFO: (14) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test<... (200; 1.654893ms)
May 1 16:56:22.305: INFO: (15) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 3.590852ms)
May 1 16:56:22.305: INFO: (15) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 3.964769ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.60488ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 4.614251ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.623419ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.678442ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 4.656948ms)
May 1 16:56:22.306: INFO: (15) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 3.467125ms)
May 1 16:56:22.310: INFO: (16) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.429377ms)
May 1 16:56:22.310: INFO: (16) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 3.477144ms)
May 1 16:56:22.310: INFO: (16) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 3.417507ms)
May 1 16:56:22.310: INFO: (16) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 3.554995ms)
May 1 16:56:22.311: INFO: (16) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 4.223795ms)
May 1 16:56:22.311: INFO: (16) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.519907ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 4.980826ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 5.075312ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname2/proxy/: bar (200; 5.147446ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 5.112169ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 5.216011ms)
May 1 16:56:22.312: INFO: (16) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 5.171799ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.164806ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 6.206544ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 6.221304ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 6.334781ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 6.312715ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 6.464433ms)
May 1 16:56:22.318: INFO: (17) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 6.425918ms)
May 1 16:56:22.319: INFO: (17) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 6.545702ms)
May 1 16:56:22.319: INFO: (17) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test<... (200; 3.766572ms)
May 1 16:56:22.324: INFO: (18) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx/proxy/: test (200; 3.772426ms)
May 1 16:56:22.324: INFO: (18) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 3.912054ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 4.127745ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:1080/proxy/: ... (200; 4.437572ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 4.320473ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 4.802309ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 4.720921ms)
May 1 16:56:22.325: INFO: (18) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: test (200; 4.085859ms)
May 1 16:56:22.331: INFO: (19) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname2/proxy/: bar (200; 4.325887ms)
May 1 16:56:22.331: INFO: (19) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname1/proxy/: tls baz (200; 4.460684ms)
May 1 16:56:22.331: INFO: (19) /api/v1/namespaces/proxy-4064/services/http:proxy-service-dgfwl:portname1/proxy/: foo (200; 4.527604ms)
May 1 16:56:22.331: INFO: (19) /api/v1/namespaces/proxy-4064/services/https:proxy-service-dgfwl:tlsportname2/proxy/: tls qux (200; 4.483775ms)
May 1 16:56:22.331: INFO: (19) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:460/proxy/: tls baz (200; 4.396453ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.101342ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:160/proxy/: foo (200; 5.363636ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/proxy-service-dgfwl-q8dwx:1080/proxy/: test<... (200; 5.377949ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/services/proxy-service-dgfwl:portname1/proxy/: foo (200; 5.55159ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:443/proxy/: ... (200; 5.573553ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/https:proxy-service-dgfwl-q8dwx:462/proxy/: tls qux (200; 5.606263ms)
May 1 16:56:22.332: INFO: (19) /api/v1/namespaces/proxy-4064/pods/http:proxy-service-dgfwl-q8dwx:162/proxy/: bar (200; 5.465687ms)
STEP: deleting ReplicationController proxy-service-dgfwl in namespace proxy-4064, will wait for the garbage collector to delete the pods
May 1 16:56:22.391: INFO: Deleting ReplicationController proxy-service-dgfwl took: 6.663906ms
May 1 16:56:22.691: INFO: Terminating ReplicationController proxy-service-dgfwl pods took: 300.243642ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:56:32.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4064" for this suite.
May 1 16:56:38.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:56:38.336: INFO: namespace proxy-4064 deletion completed in 6.141722638s
• [SLOW TEST:30.017 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:56:38.337: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-87c63e35-c4f4-4df5-9b04-ea47d2a97510
STEP: Creating a pod to test consume configMaps
May 1 16:56:38.488: INFO: Waiting up to 5m0s for pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39" in namespace "configmap-473" to be "success or failure"
May 1 16:56:38.546: INFO: Pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39": Phase="Pending", Reason="", readiness=false. Elapsed: 57.938564ms
May 1 16:56:40.551: INFO: Pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063350841s
May 1 16:56:42.555: INFO: Pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067689575s
May 1 16:56:44.559: INFO: Pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071268164s
STEP: Saw pod success
May 1 16:56:44.559: INFO: Pod "pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39" satisfied condition "success or failure"
May 1 16:56:44.562: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39 container configmap-volume-test:
STEP: delete the pod
May 1 16:56:44.638: INFO: Waiting for pod pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39 to disappear
May 1 16:56:44.715: INFO: Pod pod-configmaps-be3b2e23-9a25-466d-ac74-2ae8507f9d39 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:56:44.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-473" for this suite.
May 1 16:56:50.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:56:50.863: INFO: namespace configmap-473 deletion completed in 6.144742906s
• [SLOW TEST:12.526 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:56:50.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-cf11984b-8bfc-4b03-a4fb-106f41f145e9
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:56:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4085" for this suite.
May 1 16:56:56.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:56:57.043: INFO: namespace configmap-4085 deletion completed in 6.09299704s
• [SLOW TEST:6.179 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:56:57.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
May 1 16:56:57.097: INFO: Waiting up to 5m0s for pod "var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf" in namespace "var-expansion-5845" to be "success or failure"
May 1 16:56:57.110: INFO: Pod "var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.46873ms
May 1 16:56:59.114: INFO: Pod "var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017009349s
May 1 16:57:01.119: INFO: Pod "var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021271181s
STEP: Saw pod success
May 1 16:57:01.119: INFO: Pod "var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf" satisfied condition "success or failure"
May 1 16:57:01.122: INFO: Trying to get logs from node iruya-worker pod var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf container dapi-container:
STEP: delete the pod
May 1 16:57:01.293: INFO: Waiting for pod var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf to disappear
May 1 16:57:01.308: INFO: Pod var-expansion-e6e54b0d-2415-4cd9-8ab2-9f7da13dcabf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:57:01.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5845" for this suite.
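The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from the e2e framework's poll loop: it repeatedly fetches the pod, logs the phase and elapsed time, and stops on a terminal phase or timeout. A minimal sketch of that pattern, assuming a generic `get_phase` callback rather than the framework's actual Go API:

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or timeout_s
    elapses. Mirrors the e2e framework's "Waiting up to 5m0s for pod ..."
    loop; get_phase is a hypothetical stand-in for a real pod-status lookup.
    Returns True for Succeeded, False for Failed."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        phase = get_phase()
        # The framework logs one line like this per poll iteration.
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(interval_s)

# Simulate the Pending -> Pending -> Succeeded sequence seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
ok = wait_for_pod_condition(lambda: next(phases), interval_s=0.0)
```

The real framework polls every 2 seconds, which is why the `Elapsed:` values above step in roughly 2-second increments.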
May 1 16:57:07.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:57:07.547: INFO: namespace var-expansion-5845 deletion completed in 6.235513812s
• [SLOW TEST:10.503 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:57:07.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 16:57:07.618: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.771007ms)
May 1 16:57:07.621: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.257959ms)
May 1 16:57:07.624: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.727346ms)
May 1 16:57:07.627: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.127158ms)
May 1 16:57:07.630: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.890221ms)
May 1 16:57:07.633: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.839496ms)
May 1 16:57:07.661: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 27.432527ms)
May 1 16:57:07.664: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.754279ms)
May 1 16:57:07.667: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.826179ms)
May 1 16:57:07.670: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.834413ms)
May 1 16:57:07.673: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.762039ms)
May 1 16:57:07.675: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.601057ms)
May 1 16:57:07.678: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.27315ms)
May 1 16:57:07.680: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.17107ms)
May 1 16:57:07.682: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.350978ms)
May 1 16:57:07.685: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.145891ms)
May 1 16:57:07.687: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.650926ms)
May 1 16:57:07.690: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.72611ms)
May 1 16:57:07.692: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.323617ms)
May 1 16:57:07.695: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.823124ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:57:07.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5064" for this suite.
May 1 16:57:13.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:57:13.892: INFO: namespace proxy-5064 deletion completed in 6.194282139s
• [SLOW TEST:6.345 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:57:13.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c047ef9c-b859-4fba-a329-48ce863b2b86
STEP: Creating a pod to test consume configMaps
May 1 16:57:14.316: INFO: Waiting up to 5m0s for pod
"pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448" in namespace "projected-2341" to be "success or failure"
May 1 16:57:14.326: INFO: Pod "pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448": Phase="Pending", Reason="", readiness=false. Elapsed: 9.66799ms
May 1 16:57:16.348: INFO: Pod "pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032440369s
May 1 16:57:18.353: INFO: Pod "pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448": Phase="Running", Reason="", readiness=true. Elapsed: 4.037024018s
May 1 16:57:20.357: INFO: Pod "pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040829977s
STEP: Saw pod success
May 1 16:57:20.357: INFO: Pod "pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448" satisfied condition "success or failure"
May 1 16:57:20.360: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448 container projected-configmap-volume-test:
STEP: delete the pod
May 1 16:57:20.381: INFO: Waiting for pod pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448 to disappear
May 1 16:57:20.386: INFO: Pod pod-projected-configmaps-1b0bfae5-f22f-42de-bda2-b718d236c448 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:57:20.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2341" for this suite.
May 1 16:57:26.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:57:26.466: INFO: namespace projected-2341 deletion completed in 6.076381462s
• [SLOW TEST:12.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:57:26.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0501 16:57:57.115995       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
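The garbage-collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan`, which asks the API server to remove the Deployment but leave its owned ReplicaSet behind (the test then waits 30 seconds to confirm the RS survives). A minimal sketch of the DeleteOptions request body a client would send; the field names follow the `meta/v1` DeleteOptions schema, but treat the exact wire shape here as illustrative rather than a verbatim capture from this run:

```python
import json

# DeleteOptions body for an orphaning delete. Valid propagationPolicy
# values are "Orphan", "Background", and "Foreground"; this test
# exercises "Orphan" so the dependent ReplicaSet is not cascaded.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}

# Serialized as it would appear in a DELETE request body.
body = json.dumps(delete_options)
```

With `Background` or `Foreground` instead, the garbage collector would cascade the delete to the ReplicaSet and its pods, which is exactly what this conformance test verifies does not happen under `Orphan`.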
May 1 16:57:57.116: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:57:57.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-268" for this suite.
May 1 16:58:07.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:58:07.206: INFO: namespace gc-268 deletion completed in 10.087663654s
• [SLOW TEST:40.740 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:58:07.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-9bb6ac56-65a8-4123-8dba-90b931b51ba0
STEP: Creating a pod to test consume secrets
May 1 16:58:07.305: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb" in namespace "projected-5660" to be "success or failure"
May 1 16:58:07.330: INFO: Pod "pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.763851ms
May 1 16:58:09.334: INFO: Pod "pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028605337s
May 1 16:58:11.337: INFO: Pod "pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032201783s
STEP: Saw pod success
May 1 16:58:11.338: INFO: Pod "pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb" satisfied condition "success or failure"
May 1 16:58:11.340: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb container projected-secret-volume-test:
STEP: delete the pod
May 1 16:58:11.370: INFO: Waiting for pod pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb to disappear
May 1 16:58:11.432: INFO: Pod pod-projected-secrets-9850a795-cb29-4ffd-ad35-cf57f4043efb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:58:11.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5660" for this suite.
May 1 16:58:17.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:58:17.535: INFO: namespace projected-5660 deletion completed in 6.098629855s
• [SLOW TEST:10.328 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:58:17.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 1 16:58:17.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6947,SelfLink:/api/v1/namespaces/watch-6947/configmaps/e2e-watch-test-resource-version,UID:280c5b06-fc06-4bbf-a48e-3a265b61ae79,ResourceVersion:8475034,Generation:0,CreationTimestamp:2020-05-01 16:58:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 1 16:58:17.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6947,SelfLink:/api/v1/namespaces/watch-6947/configmaps/e2e-watch-test-resource-version,UID:280c5b06-fc06-4bbf-a48e-3a265b61ae79,ResourceVersion:8475035,Generation:0,CreationTimestamp:2020-05-01 16:58:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:58:17.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6947" for this suite.
May 1 16:58:23.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:58:23.716: INFO: namespace watch-6947 deletion completed in 6.081879118s
• [SLOW TEST:6.181 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:58:23.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
May 1 16:58:23.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-109'
May 1 16:58:24.108: INFO: stderr: ""
May 1 16:58:24.108: INFO: stdout: "pod/pause created\n"
May 1 16:58:24.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 1 16:58:24.108: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-109" to be "running and ready"
May 1 16:58:24.136: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 28.040324ms
May 1 16:58:26.155: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046882805s
May 1 16:58:28.159: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.051503489s
May 1 16:58:28.159: INFO: Pod "pause" satisfied condition "running and ready"
May 1 16:58:28.159: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
May 1 16:58:28.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-109'
May 1 16:58:28.260: INFO: stderr: ""
May 1 16:58:28.260: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 1 16:58:28.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-109'
May 1 16:58:28.350: INFO: stderr: ""
May 1 16:58:28.350: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 1 16:58:28.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-109'
May 1 16:58:28.442: INFO: stderr: ""
May 1 16:58:28.442: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 1 16:58:28.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-109'
May 1 16:58:28.565: INFO: stderr: ""
May 1 16:58:28.566: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
May 1 16:58:28.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-109'
May 1 16:58:28.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 16:58:28.727: INFO: stdout: "pod \"pause\" force deleted\n"
May 1 16:58:28.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-109'
May 1 16:58:28.912: INFO: stderr: "No resources found.\n"
May 1 16:58:28.912: INFO: stdout: ""
May 1 16:58:28.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 16:58:29.002: INFO: stderr: ""
May 1 16:58:29.002: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:58:29.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-109" for this suite.
May 1 16:58:35.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:58:35.128: INFO: namespace kubectl-109 deletion completed in 6.122454117s
• [SLOW TEST:11.411 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:58:35.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a10f1415-9f29-4c1d-9489-12dd5c83b7e3
STEP: Creating secret with name s-test-opt-upd-63136191-d963-4076-bc2b-5876f258f5e3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a10f1415-9f29-4c1d-9489-12dd5c83b7e3
STEP: Updating secret s-test-opt-upd-63136191-d963-4076-bc2b-5876f258f5e3
STEP: Creating secret with name s-test-opt-create-4aeea3da-43f5-454f-822c-9bd7938b74e3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:58:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3280" for this suite.
May 1 16:59:07.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:59:07.817: INFO: namespace projected-3280 deletion completed in 22.099179148s
• [SLOW TEST:32.688 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:59:07.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 16:59:07.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a" in namespace "projected-5681" to be "success or failure"
May 1 16:59:08.012: INFO: Pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.988177ms
May 1 16:59:10.099: INFO: Pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104391817s
May 1 16:59:12.104: INFO: Pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a": Phase="Running", Reason="", readiness=true. Elapsed: 4.109202516s
May 1 16:59:14.108: INFO: Pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113267864s
STEP: Saw pod success
May 1 16:59:14.108: INFO: Pod "downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a" satisfied condition "success or failure"
May 1 16:59:14.112: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a container client-container:
STEP: delete the pod
May 1 16:59:14.148: INFO: Waiting for pod downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a to disappear
May 1 16:59:14.173: INFO: Pod downwardapi-volume-dc8a8bc5-b4cf-4bd4-868a-04656fa2a24a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:59:14.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5681" for this suite.
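The spec above verifies that a projected downward API volume exposes the container's own CPU request as a file inside the pod. A minimal manifest exercising the same volume plugin might look like the following sketch; the pod name, image, and request value are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31             # assumed image; the e2e test uses its own
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m           # file then contains the request in millicores
```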
May 1 16:59:20.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 16:59:20.300: INFO: namespace projected-5681 deletion completed in 6.123407043s
• [SLOW TEST:12.483 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 16:59:20.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 1 16:59:24.406: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6188dadd-b828-4d07-badc-49d3761514eb,GenerateName:,Namespace:events-5622,SelfLink:/api/v1/namespaces/events-5622/pods/send-events-6188dadd-b828-4d07-badc-49d3761514eb,UID:d00eb0ff-c1c2-49f9-abdb-4c2625481848,ResourceVersion:8475272,Generation:0,CreationTimestamp:2020-05-01 16:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 348435953,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d22pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d22pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-d22pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e63fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e63fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:59:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:59:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:59:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 16:59:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.142,StartTime:2020-05-01 16:59:20 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-01 16:59:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://462a232ca08903ae275ca65f51ff47ed12850c3bc2618f7edd50cfb8ec3f29a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
May 1 16:59:26.413: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 1 16:59:28.417: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 16:59:28.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5622" for this suite.
May 1 17:00:16.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:00:16.528: INFO: namespace events-5622 deletion completed in 48.097129165s
• [SLOW TEST:56.228 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:00:16.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 1 17:00:22.790: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 1 17:00:32.891: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:00:32.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7026" for this suite.
May 1 17:00:39.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:00:39.083: INFO: namespace pods-7026 deletion completed in 6.183803712s
• [SLOW TEST:22.554 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:00:39.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ddd974a2-5442-496c-9692-7343196e912c
STEP: Creating a pod to test consume secrets
May 1 17:00:39.692: INFO: Waiting up to 5m0s for pod "pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f" in namespace "secrets-8422" to be "success or failure"
May 1 17:00:39.873: INFO: Pod "pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 180.786619ms
May 1 17:00:41.951: INFO: Pod "pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258645093s
May 1 17:00:44.027: INFO: Pod "pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334342197s
STEP: Saw pod success
May 1 17:00:44.027: INFO: Pod "pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f" satisfied condition "success or failure"
May 1 17:00:44.351: INFO: Trying to get logs from node iruya-worker pod pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f container secret-volume-test:
STEP: delete the pod
May 1 17:00:44.827: INFO: Waiting for pod pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f to disappear
May 1 17:00:44.876: INFO: Pod pod-secrets-384a0e98-1dcd-4bc8-9dd8-15ca89fa3b4f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:00:44.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8422" for this suite.
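The secrets test above mounts a Secret into a pod as a volume and reads its keys back as files. A minimal sketch of the same setup, with hypothetical names and an assumed busybox image (the e2e test generates its own names and uses its own test image):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo          # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.31           # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
```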
May 1 17:00:50.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:00:51.072: INFO: namespace secrets-8422 deletion completed in 6.165201011s
• [SLOW TEST:11.989 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:00:51.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3
May 1 17:00:51.373: INFO: Pod name my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3: Found 0 pods out of 1
May 1 17:00:56.378: INFO: Pod name my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3: Found 1 pods out of 1
May 1 17:00:56.378: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3" are running
May 1 17:00:56.381: INFO: Pod "my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3-gpnzm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 17:00:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 17:00:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 17:00:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 17:00:51 +0000 UTC Reason: Message:}])
May 1 17:00:56.381: INFO: Trying to dial the pod
May 1 17:01:01.391: INFO: Controller my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3: Got expected result from replica 1 [my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3-gpnzm]: "my-hostname-basic-32c559bc-4c22-44b2-a81c-3312b0aceef3-gpnzm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:01:01.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3010" for this suite.
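The ReplicationController test above creates one replica of a hostname-serving image and checks that dialing the pod returns its own name. A sketch of an equivalent manifest; the controller name and port are illustrative, while the serve-hostname image does appear elsewhere in this log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic          # hypothetical name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376      # assumed default port for serve-hostname
```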
May 1 17:01:07.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:01:07.617: INFO: namespace replication-controller-3010 deletion completed in 6.223925984s
• [SLOW TEST:16.545 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:01:07.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
May 1 17:01:08.587: INFO: Waiting up to 5m0s for pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a" in namespace "containers-5830" to be "success or failure"
May 1 17:01:08.890: INFO: Pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a": Phase="Pending", Reason="", readiness=false. Elapsed: 302.303676ms
May 1 17:01:10.893: INFO: Pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305957875s
May 1 17:01:12.898: INFO: Pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310490176s
May 1 17:01:14.920: INFO: Pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.333014215s
STEP: Saw pod success
May 1 17:01:14.920: INFO: Pod "client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a" satisfied condition "success or failure"
May 1 17:01:14.923: INFO: Trying to get logs from node iruya-worker pod client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a container test-container:
STEP: delete the pod
May 1 17:01:15.216: INFO: Waiting for pod client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a to disappear
May 1 17:01:15.597: INFO: Pod client-containers-65f61116-6fd0-4963-992d-46b6c7a5e40a no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:01:15.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5830" for this suite.
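The "override all" test above replaces both the image's entrypoint and its default arguments via the pod spec. In a manifest this corresponds to setting `command` (overrides the image ENTRYPOINT) and `args` (overrides the image CMD) together; names and image below are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31           # assumed image
    command: ["echo"]             # overrides ENTRYPOINT
    args: ["override", "arguments"]  # overrides CMD
```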
May 1 17:01:24.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:01:24.310: INFO: namespace containers-5830 deletion completed in 8.683887883s
• [SLOW TEST:16.692 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:01:24.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5025
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 1 17:01:25.400: INFO: Found 0 stateful pods, waiting for 3
May 1 17:01:35.405: INFO: Found 2 stateful pods, waiting for 3
May 1 17:01:45.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 1 17:01:45.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 1 17:01:45.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 1 17:01:45.431: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 1 17:01:55.610: INFO: Updating stateful set ss2
May 1 17:01:55.689: INFO: Waiting for Pod statefulset-5025/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 1 17:02:06.459: INFO: Found 2 stateful pods, waiting for 3
May 1 17:02:16.464: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 1 17:02:16.464: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 1 17:02:16.464: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 1 17:02:16.488: INFO: Updating stateful set ss2
May 1 17:02:16.541: INFO: Waiting for Pod statefulset-5025/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 1 17:02:26.567: INFO: Updating stateful set ss2
May 1 17:02:26.579: INFO: Waiting for StatefulSet statefulset-5025/ss2 to complete update
May 1 17:02:26.579: INFO: Waiting for Pod statefulset-5025/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 1 17:02:36.587: INFO: Deleting all statefulset in ns statefulset-5025
May 1 17:02:36.590: INFO: Scaling statefulset ss2 to 0
May 1 17:02:56.621: INFO: Waiting for statefulset status.replicas updated to 0
May 1 17:02:56.624: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:02:56.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5025" for this suite.
May 1 17:03:04.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:03:04.744: INFO: namespace statefulset-5025 deletion completed in 8.100136781s
• [SLOW TEST:100.435 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:03:04.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6c9a0ef1-b4a7-4a17-9eca-6891c2db2136
STEP: Creating a pod to test consume secrets
May 1 17:03:04.861: INFO: Waiting up to 5m0s for pod "pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e" in namespace "secrets-5825" to be "success or failure"
May 1 17:03:04.926: INFO: Pod "pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e": Phase="Pending", Reason="", readiness=false. Elapsed: 64.587908ms
May 1 17:03:06.930: INFO: Pod "pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06870438s
May 1 17:03:08.934: INFO: Pod "pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072834441s
STEP: Saw pod success
May 1 17:03:08.934: INFO: Pod "pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e" satisfied condition "success or failure"
May 1 17:03:08.937: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e container secret-env-test:
STEP: delete the pod
May 1 17:03:08.959: INFO: Waiting for pod pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e to disappear
May 1 17:03:08.986: INFO: Pod pod-secrets-b03b2912-eb43-4cb4-96e5-71a757022e0e no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:03:08.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5825" for this suite.
May 1 17:03:15.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:03:16.653: INFO: namespace secrets-5825 deletion completed in 7.663615053s
• [SLOW TEST:11.908 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:03:16.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 1 17:03:16.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 1 17:03:16.968: INFO: Waiting for terminating namespaces to be deleted...
May 1 17:03:16.970: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 1 17:03:16.976: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.976: INFO: Container kube-proxy ready: true, restart count 0
May 1 17:03:16.976: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.976: INFO: Container kindnet-cni ready: true, restart count 0
May 1 17:03:16.976: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 1 17:03:16.982: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.982: INFO: Container coredns ready: true, restart count 0
May 1 17:03:16.982: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.982: INFO: Container coredns ready: true, restart count 0
May 1 17:03:16.982: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.982: INFO: Container kube-proxy ready: true, restart count 0
May 1 17:03:16.982: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 1 17:03:16.982: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-164c1183-d02b-4725-910b-7439c73efb8d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-164c1183-d02b-4725-910b-7439c73efb8d off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-164c1183-d02b-4725-910b-7439c73efb8d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:03:29.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6602" for this suite.
May 1 17:03:47.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:03:47.531: INFO: namespace sched-pred-6602 deletion completed in 18.078763718s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:30.877 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:03:47.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 1 17:03:48.925: INFO: Pod name wrapped-volume-race-5ae97afb-fc80-4257-8f3e-df407d062c87: Found 0 pods out of 5
May 1 17:03:53.931: INFO: Pod name wrapped-volume-race-5ae97afb-fc80-4257-8f3e-df407d062c87: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5ae97afb-fc80-4257-8f3e-df407d062c87 in namespace emptydir-wrapper-4583, will wait for the garbage collector to delete the pods
May 1 17:04:08.284: INFO: Deleting ReplicationController wrapped-volume-race-5ae97afb-fc80-4257-8f3e-df407d062c87 took: 8.164345ms
May 1 17:04:08.684: INFO: Terminating ReplicationController wrapped-volume-race-5ae97afb-fc80-4257-8f3e-df407d062c87 pods took: 400.278305ms
STEP: Creating RC which spawns configmap-volume pods
May 1 17:05:04.457: INFO: Pod name wrapped-volume-race-d4c88905-3fc6-41f5-951c-79e49bbca914: Found 0 pods out of 5
May 1 17:05:09.594: INFO: Pod name wrapped-volume-race-d4c88905-3fc6-41f5-951c-79e49bbca914: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d4c88905-3fc6-41f5-951c-79e49bbca914 in namespace emptydir-wrapper-4583, will wait for the garbage collector to delete the pods
May 1 17:05:35.484: INFO: Deleting ReplicationController wrapped-volume-race-d4c88905-3fc6-41f5-951c-79e49bbca914 took: 7.216871ms
May 1 17:05:36.184: INFO: Terminating ReplicationController wrapped-volume-race-d4c88905-3fc6-41f5-951c-79e49bbca914 pods took: 700.36709ms
STEP: Creating RC which spawns configmap-volume pods
May 1 17:06:22.609: INFO: Pod name wrapped-volume-race-b45f9d34-937a-4767-9676-6d34e7f31298: Found 0 pods out of 5
May 1 17:06:27.648: INFO: Pod name wrapped-volume-race-b45f9d34-937a-4767-9676-6d34e7f31298: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b45f9d34-937a-4767-9676-6d34e7f31298 in namespace emptydir-wrapper-4583, will wait for the garbage collector to delete the pods
May 1 17:06:43.780: INFO: Deleting ReplicationController wrapped-volume-race-b45f9d34-937a-4767-9676-6d34e7f31298 took: 26.000242ms
May 1 17:06:44.080: INFO: Terminating ReplicationController wrapped-volume-race-b45f9d34-937a-4767-9676-6d34e7f31298 pods took: 300.240118ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:07:24.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4583" for this suite.
May 1 17:07:34.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:07:34.893: INFO: namespace emptydir-wrapper-4583 deletion completed in 10.113344475s
• [SLOW TEST:227.362 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:07:34.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 1 17:07:45.012: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.012: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.042028 6 log.go:172] (0xc0017a26e0) (0xc001c82a00) Create stream
I0501 17:07:45.042055 6 log.go:172] (0xc0017a26e0) (0xc001c82a00) Stream added, broadcasting: 1
I0501 17:07:45.044070 6 log.go:172] (0xc0017a26e0) Reply frame received for 1
I0501 17:07:45.044106 6 log.go:172] (0xc0017a26e0) (0xc001c82aa0) Create stream
I0501 17:07:45.044117 6 log.go:172] (0xc0017a26e0) (0xc001c82aa0) Stream added, broadcasting: 3
I0501 17:07:45.045040 6 log.go:172] (0xc0017a26e0) Reply frame received for 3
I0501 17:07:45.045079 6 log.go:172] (0xc0017a26e0) (0xc000396960) Create stream
I0501 17:07:45.045093 6 log.go:172] (0xc0017a26e0) (0xc000396960) Stream added, broadcasting: 5
I0501 17:07:45.046356 6 log.go:172] (0xc0017a26e0) Reply frame received for 5
I0501 17:07:45.115843 6 log.go:172] (0xc0017a26e0) Data frame received for 5
I0501 17:07:45.115879 6 log.go:172] (0xc000396960) (5) Data frame handling
I0501 17:07:45.115904 6 log.go:172] (0xc0017a26e0) Data frame received for 3
I0501 17:07:45.115915 6 log.go:172] (0xc001c82aa0) (3) Data frame handling
I0501 17:07:45.115928 6 log.go:172] (0xc001c82aa0) (3) Data frame sent
I0501 17:07:45.115939 6 log.go:172] (0xc0017a26e0) Data frame received for 3
I0501 17:07:45.115948 6 log.go:172] (0xc001c82aa0) (3) Data frame handling
I0501 17:07:45.117797 6 log.go:172] (0xc0017a26e0) Data frame received for 1
I0501 17:07:45.117825 6 log.go:172] (0xc001c82a00) (1) Data frame handling
I0501 17:07:45.117839 6 log.go:172] (0xc001c82a00) (1) Data frame sent
I0501 17:07:45.117851 6 log.go:172] (0xc0017a26e0) (0xc001c82a00) Stream removed, broadcasting: 1
I0501 17:07:45.117908 6 log.go:172] (0xc0017a26e0) Go away received
I0501 17:07:45.118305 6 log.go:172] (0xc0017a26e0) (0xc001c82a00) Stream removed, broadcasting: 1
I0501 17:07:45.118326 6 log.go:172] (0xc0017a26e0) (0xc001c82aa0) Stream removed, broadcasting: 3
I0501 17:07:45.118336 6 log.go:172] (0xc0017a26e0) (0xc000396960) Stream removed, broadcasting: 5
May 1 17:07:45.118: INFO: Exec stderr: ""
May 1 17:07:45.118: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.118: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.155810 6 log.go:172] (0xc001692790) (0xc000396fa0) Create stream
I0501 17:07:45.155842 6 log.go:172] (0xc001692790) (0xc000396fa0) Stream added, broadcasting: 1
I0501 17:07:45.158416 6 log.go:172] (0xc001692790) Reply frame received for 1
I0501 17:07:45.158461 6 log.go:172] (0xc001692790) (0xc00235a8c0) Create stream
I0501 17:07:45.158476 6 log.go:172] (0xc001692790) (0xc00235a8c0) Stream added, broadcasting: 3
I0501 17:07:45.159355 6 log.go:172] (0xc001692790) Reply frame received for 3
I0501 17:07:45.159409 6 log.go:172] (0xc001692790) (0xc000397220) Create stream
I0501 17:07:45.159433 6 log.go:172] (0xc001692790) (0xc000397220) Stream added, broadcasting: 5
I0501 17:07:45.160446 6 log.go:172] (0xc001692790) Reply frame received for 5
I0501 17:07:45.216203 6 log.go:172] (0xc001692790) Data frame received for 5
I0501 17:07:45.216244 6 log.go:172] (0xc000397220) (5) Data frame handling
I0501 17:07:45.216271 6 log.go:172] (0xc001692790) Data frame received for 3
I0501 17:07:45.216289 6 log.go:172] (0xc00235a8c0) (3) Data frame handling
I0501 17:07:45.216320 6 log.go:172] (0xc00235a8c0) (3) Data frame sent
I0501 17:07:45.216336 6 log.go:172] (0xc001692790) Data frame received for 3
I0501 17:07:45.216349 6 log.go:172] (0xc00235a8c0) (3) Data frame handling
I0501 17:07:45.217499 6 log.go:172] (0xc001692790) Data frame received for 1
I0501 17:07:45.217625 6 log.go:172] (0xc000396fa0) (1) Data frame handling
I0501 17:07:45.217648 6 log.go:172] (0xc000396fa0) (1) Data frame sent
I0501 17:07:45.217660 6 log.go:172] (0xc001692790) (0xc000396fa0) Stream removed, broadcasting: 1
I0501 17:07:45.217682 6 log.go:172] (0xc001692790) Go away received
I0501 17:07:45.217772 6 log.go:172] (0xc001692790) (0xc000396fa0) Stream removed, broadcasting: 1
I0501 17:07:45.217794 6 log.go:172] (0xc001692790) (0xc00235a8c0) Stream removed, broadcasting: 3
I0501 17:07:45.217806 6 log.go:172] (0xc001692790) (0xc000397220) Stream removed, broadcasting: 5
May 1 17:07:45.217: INFO: Exec stderr: ""
May 1 17:07:45.217: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.217: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.239435 6 log.go:172] (0xc00023d810) (0xc0013acaa0) Create stream
I0501 17:07:45.239464 6 log.go:172] (0xc00023d810) (0xc0013acaa0) Stream added, broadcasting: 1
I0501 17:07:45.241261 6 log.go:172] (0xc00023d810) Reply frame received for 1
I0501 17:07:45.241296 6 log.go:172] (0xc00023d810) (0xc0013acb40) Create stream
I0501 17:07:45.241309 6 log.go:172] (0xc00023d810) (0xc0013acb40) Stream added, broadcasting: 3
I0501 17:07:45.242261 6 log.go:172] (0xc00023d810) Reply frame received for 3
I0501 17:07:45.242286 6 log.go:172] (0xc00023d810) (0xc00235a960) Create stream
I0501 17:07:45.242299 6 log.go:172] (0xc00023d810) (0xc00235a960) Stream added, broadcasting: 5
I0501 17:07:45.243047 6 log.go:172] (0xc00023d810) Reply frame received for 5
I0501 17:07:45.318021 6 log.go:172] (0xc00023d810) Data frame received for 3
I0501 17:07:45.318063 6 log.go:172] (0xc0013acb40) (3) Data frame handling
I0501 17:07:45.318080 6 log.go:172] (0xc0013acb40) (3) Data frame sent
I0501 17:07:45.318092 6 log.go:172] (0xc00023d810) Data frame received for 3
I0501 17:07:45.318101 6 log.go:172] (0xc0013acb40) (3) Data frame handling
I0501 17:07:45.318125 6 log.go:172] (0xc00023d810) Data frame received for 5
I0501 17:07:45.318136 6 log.go:172] (0xc00235a960) (5) Data frame handling
I0501 17:07:45.320888 6 log.go:172] (0xc00023d810) Data frame received for 1
I0501 17:07:45.320924 6 log.go:172] (0xc0013acaa0) (1) Data frame handling
I0501 17:07:45.320981 6 log.go:172] (0xc0013acaa0) (1) Data frame sent
I0501 17:07:45.321010 6 log.go:172] (0xc00023d810) (0xc0013acaa0) Stream removed, broadcasting: 1
I0501 17:07:45.321055 6 log.go:172] (0xc00023d810) Go away received
I0501 17:07:45.321428 6 log.go:172] (0xc00023d810) (0xc0013acaa0) Stream removed, broadcasting: 1
I0501 17:07:45.321461 6 log.go:172] (0xc00023d810) (0xc0013acb40) Stream removed, broadcasting: 3
I0501 17:07:45.321485 6 log.go:172] (0xc00023d810) (0xc00235a960) Stream removed, broadcasting: 5
May 1 17:07:45.321: INFO: Exec stderr: ""
May 1 17:07:45.321: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.321: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.350624 6 log.go:172] (0xc0017a3970) (0xc001c82f00) Create stream
I0501 17:07:45.350676 6 log.go:172] (0xc0017a3970) (0xc001c82f00) Stream added, broadcasting: 1
I0501 17:07:45.353571 6 log.go:172] (0xc0017a3970) Reply frame received for 1
I0501 17:07:45.353623 6 log.go:172] (0xc0017a3970) (0xc000397360) Create stream
I0501 17:07:45.353641 6 log.go:172] (0xc0017a3970) (0xc000397360) Stream added, broadcasting: 3
I0501 17:07:45.354445 6 log.go:172] (0xc0017a3970) Reply frame received for 3
I0501 17:07:45.354476 6 log.go:172] (0xc0017a3970) (0xc0000fe8c0) Create stream
I0501 17:07:45.354487 6 log.go:172] (0xc0017a3970) (0xc0000fe8c0) Stream added, broadcasting: 5
I0501 17:07:45.355262 6 log.go:172] (0xc0017a3970) Reply frame received for 5
I0501 17:07:45.415606 6 log.go:172] (0xc0017a3970) Data frame received for 5
I0501 17:07:45.415650 6 log.go:172] (0xc0000fe8c0) (5) Data frame handling
I0501 17:07:45.415675 6 log.go:172] (0xc0017a3970) Data frame received for 3
I0501 17:07:45.415705 6 log.go:172] (0xc000397360) (3) Data frame handling
I0501 17:07:45.415727 6 log.go:172] (0xc000397360) (3) Data frame sent
I0501 17:07:45.415737 6 log.go:172] (0xc0017a3970) Data frame received for 3
I0501 17:07:45.415749 6 log.go:172] (0xc000397360) (3) Data frame handling
I0501 17:07:45.417464 6 log.go:172] (0xc0017a3970) Data frame received for 1
I0501 17:07:45.417511 6 log.go:172] (0xc001c82f00) (1) Data frame handling
I0501 17:07:45.417543 6 log.go:172] (0xc001c82f00) (1) Data frame sent
I0501 17:07:45.417632 6 log.go:172] (0xc0017a3970) (0xc001c82f00) Stream removed, broadcasting: 1
I0501 17:07:45.417659 6 log.go:172] (0xc0017a3970) Go away received
I0501 17:07:45.417766 6 log.go:172] (0xc0017a3970) (0xc001c82f00) Stream removed, broadcasting: 1
I0501 17:07:45.417786 6 log.go:172] (0xc0017a3970) (0xc000397360) Stream removed, broadcasting: 3
I0501 17:07:45.417796 6 log.go:172] (0xc0017a3970) (0xc0000fe8c0) Stream removed, broadcasting: 5
May 1 17:07:45.417: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 1 17:07:45.417: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.417: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.465386 6 log.go:172] (0xc0023a2630) (0xc0013ad040) Create stream
I0501 17:07:45.465433 6 log.go:172] (0xc0023a2630) (0xc0013ad040) Stream added, broadcasting: 1
I0501 17:07:45.468265 6 log.go:172] (0xc0023a2630) Reply frame received for 1
I0501 17:07:45.468312 6 log.go:172] (0xc0023a2630) (0xc000397400) Create stream
I0501 17:07:45.468329 6 log.go:172] (0xc0023a2630) (0xc000397400) Stream added, broadcasting: 3
I0501 17:07:45.469676 6 log.go:172] (0xc0023a2630) Reply frame received for 3
I0501 17:07:45.469714 6 log.go:172] (0xc0023a2630) (0xc0013ad180) Create stream
I0501 17:07:45.469729 6 log.go:172] (0xc0023a2630) (0xc0013ad180) Stream added, broadcasting: 5
I0501 17:07:45.470559 6 log.go:172] (0xc0023a2630) Reply frame received for 5
I0501 17:07:45.539044 6 log.go:172] (0xc0023a2630) Data frame received for 5
I0501 17:07:45.539068 6 log.go:172] (0xc0013ad180) (5) Data frame handling
I0501 17:07:45.539086 6 log.go:172] (0xc0023a2630) Data frame received for 3
I0501 17:07:45.539094 6 log.go:172] (0xc000397400) (3) Data frame handling
I0501 17:07:45.539105 6 log.go:172] (0xc000397400) (3) Data frame sent
I0501 17:07:45.539113 6 log.go:172] (0xc0023a2630) Data frame received for 3
I0501 17:07:45.539127 6 log.go:172] (0xc000397400) (3) Data frame handling
I0501 17:07:45.540817 6 log.go:172] (0xc0023a2630) Data frame received for 1
I0501 17:07:45.540846 6 log.go:172] (0xc0013ad040) (1) Data frame handling
I0501 17:07:45.540858 6 log.go:172] (0xc0013ad040) (1) Data frame sent
I0501 17:07:45.540875 6 log.go:172] (0xc0023a2630) (0xc0013ad040) Stream removed, broadcasting: 1
I0501 17:07:45.540955 6 log.go:172] (0xc0023a2630) Go away received
I0501 17:07:45.541007 6 log.go:172] (0xc0023a2630) (0xc0013ad040) Stream removed, broadcasting: 1
I0501 17:07:45.541041 6 log.go:172] (0xc0023a2630) (0xc000397400) Stream removed, broadcasting: 3
I0501 17:07:45.541063 6 log.go:172] (0xc0023a2630) (0xc0013ad180) Stream removed, broadcasting: 5
May 1 17:07:45.541: INFO: Exec stderr: ""
May 1 17:07:45.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.541: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.568569 6 log.go:172] (0xc0028f40b0) (0xc000397860) Create stream
I0501 17:07:45.568598 6 log.go:172] (0xc0028f40b0) (0xc000397860) Stream added, broadcasting: 1
I0501 17:07:45.571580 6 log.go:172] (0xc0028f40b0) Reply frame received for 1
I0501 17:07:45.571627 6 log.go:172] (0xc0028f40b0) (0xc0003979a0) Create stream
I0501 17:07:45.571645 6 log.go:172] (0xc0028f40b0) (0xc0003979a0) Stream added, broadcasting: 3
I0501 17:07:45.572613 6 log.go:172] (0xc0028f40b0) Reply frame received for 3
I0501 17:07:45.572660 6 log.go:172] (0xc0028f40b0) (0xc0000febe0) Create stream
I0501 17:07:45.572679 6 log.go:172] (0xc0028f40b0) (0xc0000febe0) Stream added, broadcasting: 5
I0501 17:07:45.573781 6 log.go:172] (0xc0028f40b0) Reply frame received for 5
I0501 17:07:45.633108 6 log.go:172] (0xc0028f40b0) Data frame received for 5
I0501 17:07:45.633394 6 log.go:172] (0xc0000febe0) (5) Data frame handling
I0501 17:07:45.633455 6 log.go:172] (0xc0028f40b0) Data frame received for 3
I0501 17:07:45.633497 6 log.go:172] (0xc0003979a0) (3) Data frame handling
I0501 17:07:45.633538 6 log.go:172] (0xc0003979a0) (3) Data frame sent
I0501 17:07:45.633558 6 log.go:172] (0xc0028f40b0) Data frame received for 3
I0501 17:07:45.633571 6 log.go:172] (0xc0003979a0) (3) Data frame handling
I0501 17:07:45.635336 6 log.go:172] (0xc0028f40b0) Data frame received for 1
I0501 17:07:45.635366 6 log.go:172] (0xc000397860) (1) Data frame handling
I0501 17:07:45.635378 6 log.go:172] (0xc000397860) (1) Data frame sent
I0501 17:07:45.635392 6 log.go:172] (0xc0028f40b0) (0xc000397860) Stream removed, broadcasting: 1
I0501 17:07:45.635419 6 log.go:172] (0xc0028f40b0) Go away received
I0501 17:07:45.635526 6 log.go:172] (0xc0028f40b0) (0xc000397860) Stream removed, broadcasting: 1
I0501 17:07:45.635553 6 log.go:172] (0xc0028f40b0) (0xc0003979a0) Stream removed, broadcasting: 3
I0501 17:07:45.635568 6 log.go:172] (0xc0028f40b0) (0xc0000febe0) Stream removed, broadcasting: 5
May 1 17:07:45.635: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 1 17:07:45.635: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.635: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.663939 6 log.go:172] (0xc0028f4e70) (0xc000397f40) Create stream
I0501 17:07:45.663967 6 log.go:172] (0xc0028f4e70) (0xc000397f40) Stream added, broadcasting: 1
I0501 17:07:45.667396 6 log.go:172] (0xc0028f4e70) Reply frame received for 1
I0501 17:07:45.667444 6 log.go:172] (0xc0028f4e70) (0xc0000ff180) Create stream
I0501 17:07:45.667463 6 log.go:172] (0xc0028f4e70) (0xc0000ff180) Stream added, broadcasting: 3
I0501 17:07:45.668494 6 log.go:172] (0xc0028f4e70) Reply frame received for 3
I0501 17:07:45.668527 6 log.go:172] (0xc0028f4e70) (0xc00235aa00) Create stream
I0501 17:07:45.668541 6 log.go:172] (0xc0028f4e70) (0xc00235aa00) Stream added, broadcasting: 5
I0501 17:07:45.669706 6 log.go:172] (0xc0028f4e70) Reply frame received for 5
I0501 17:07:45.730724 6 log.go:172] (0xc0028f4e70) Data frame received for 3
I0501 17:07:45.730754 6 log.go:172] (0xc0000ff180) (3) Data frame handling
I0501 17:07:45.730767 6 log.go:172] (0xc0000ff180) (3) Data frame sent
I0501 17:07:45.730776 6 log.go:172] (0xc0028f4e70) Data frame received for 3
I0501 17:07:45.730794 6 log.go:172] (0xc0000ff180) (3) Data frame handling
I0501 17:07:45.730804 6 log.go:172] (0xc0028f4e70) Data frame received for 5
I0501 17:07:45.730813 6 log.go:172] (0xc00235aa00) (5) Data frame handling
I0501 17:07:45.732524 6 log.go:172] (0xc0028f4e70) Data frame received for 1
I0501 17:07:45.732546 6 log.go:172] (0xc000397f40) (1) Data frame handling
I0501 17:07:45.732557 6 log.go:172] (0xc000397f40) (1) Data frame sent
I0501 17:07:45.732569 6 log.go:172] (0xc0028f4e70) (0xc000397f40) Stream removed, broadcasting: 1
I0501 17:07:45.732597 6 log.go:172] (0xc0028f4e70) Go away received
I0501 17:07:45.732656 6 log.go:172] (0xc0028f4e70) (0xc000397f40) Stream removed, broadcasting: 1
I0501 17:07:45.732714 6 log.go:172] (0xc0028f4e70) (0xc0000ff180) Stream removed, broadcasting: 3
I0501 17:07:45.732757 6 log.go:172] (0xc0028f4e70) (0xc00235aa00) Stream removed, broadcasting: 5
May 1 17:07:45.732: INFO: Exec stderr: ""
May 1 17:07:45.732: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.732: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.765500 6 log.go:172] (0xc0023a3600) (0xc0013adae0) Create stream
I0501 17:07:45.765535 6 log.go:172] (0xc0023a3600) (0xc0013adae0) Stream added, broadcasting: 1
I0501 17:07:45.768086 6 log.go:172] (0xc0023a3600) Reply frame received for 1
I0501 17:07:45.768122 6 log.go:172] (0xc0023a3600) (0xc0013adcc0) Create stream
I0501 17:07:45.768135 6 log.go:172] (0xc0023a3600) (0xc0013adcc0) Stream added, broadcasting: 3
I0501 17:07:45.769377 6 log.go:172] (0xc0023a3600) Reply frame received for 3
I0501 17:07:45.769426 6 log.go:172] (0xc0023a3600) (0xc0033ea000) Create stream
I0501 17:07:45.769439 6 log.go:172] (0xc0023a3600) (0xc0033ea000) Stream added, broadcasting: 5
I0501 17:07:45.770640 6 log.go:172] (0xc0023a3600) Reply frame received for 5
I0501 17:07:45.839701 6 log.go:172] (0xc0023a3600) Data frame received for 3
I0501 17:07:45.839739 6 log.go:172] (0xc0013adcc0) (3) Data frame handling
I0501 17:07:45.839760 6 log.go:172] (0xc0013adcc0) (3) Data frame sent
I0501 17:07:45.839782 6 log.go:172] (0xc0023a3600) Data frame received for 3
I0501 17:07:45.839813 6 log.go:172] (0xc0013adcc0) (3) Data frame handling
I0501 17:07:45.839845 6 log.go:172] (0xc0023a3600) Data frame received for 5
I0501 17:07:45.839856 6 log.go:172] (0xc0033ea000) (5) Data frame handling
I0501 17:07:45.841531 6 log.go:172] (0xc0023a3600) Data frame received for 1
I0501 17:07:45.841553 6 log.go:172] (0xc0013adae0) (1) Data frame handling
I0501 17:07:45.841575 6 log.go:172] (0xc0013adae0) (1) Data frame sent
I0501 17:07:45.841603 6 log.go:172] (0xc0023a3600) (0xc0013adae0) Stream removed, broadcasting: 1
I0501 17:07:45.841697 6 log.go:172] (0xc0023a3600) (0xc0013adae0) Stream removed, broadcasting: 1
I0501 17:07:45.841736 6 log.go:172] (0xc0023a3600) (0xc0013adcc0) Stream removed, broadcasting: 3
I0501 17:07:45.841758 6 log.go:172] (0xc0023a3600) (0xc0033ea000) Stream removed, broadcasting: 5
May 1 17:07:45.841: INFO: Exec stderr: ""
I0501 17:07:45.841797 6 log.go:172] (0xc0023a3600) Go away received
May 1 17:07:45.841: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.841: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.875915 6 log.go:172] (0xc002f1c370) (0xc0017c0280) Create stream
I0501 17:07:45.875939 6 log.go:172] (0xc002f1c370) (0xc0017c0280) Stream added, broadcasting: 1
I0501 17:07:45.879413 6 log.go:172] (0xc002f1c370) Reply frame received for 1
I0501 17:07:45.879461 6 log.go:172] (0xc002f1c370) (0xc0000ff2c0) Create stream
I0501 17:07:45.879477 6 log.go:172] (0xc002f1c370) (0xc0000ff2c0) Stream added, broadcasting: 3
I0501 17:07:45.880596 6 log.go:172] (0xc002f1c370) Reply frame received for 3
I0501 17:07:45.880656 6 log.go:172] (0xc002f1c370) (0xc001c83040) Create stream
I0501 17:07:45.880685 6 log.go:172] (0xc002f1c370) (0xc001c83040) Stream added, broadcasting: 5
I0501 17:07:45.881889 6 log.go:172] (0xc002f1c370) Reply frame received for 5
I0501 17:07:45.946751 6 log.go:172] (0xc002f1c370) Data frame received for 5
I0501 17:07:45.946809 6 log.go:172] (0xc001c83040) (5) Data frame handling
I0501 17:07:45.946848 6 log.go:172] (0xc002f1c370) Data frame received for 3
I0501 17:07:45.946868 6 log.go:172] (0xc0000ff2c0) (3) Data frame handling
I0501 17:07:45.946889 6 log.go:172] (0xc0000ff2c0) (3) Data frame sent
I0501 17:07:45.946918 6 log.go:172] (0xc002f1c370) Data frame received for 3
I0501 17:07:45.946945 6 log.go:172] (0xc0000ff2c0) (3) Data frame handling
I0501 17:07:45.948518 6 log.go:172] (0xc002f1c370) Data frame received for 1
I0501 17:07:45.948546 6 log.go:172] (0xc0017c0280) (1) Data frame handling
I0501 17:07:45.948565 6 log.go:172] (0xc0017c0280) (1) Data frame sent
I0501 17:07:45.948853 6 log.go:172] (0xc002f1c370) (0xc0017c0280) Stream removed, broadcasting: 1
I0501 17:07:45.948895 6 log.go:172] (0xc002f1c370) Go away received
I0501 17:07:45.949020 6 log.go:172] (0xc002f1c370) (0xc0017c0280) Stream removed, broadcasting: 1
I0501 17:07:45.949048 6 log.go:172] (0xc002f1c370) (0xc0000ff2c0) Stream removed, broadcasting: 3
I0501 17:07:45.949061 6 log.go:172] (0xc002f1c370) (0xc001c83040) Stream removed, broadcasting: 5
May 1 17:07:45.949: INFO: Exec stderr: ""
May 1 17:07:45.949: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-658 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:07:45.949: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:07:45.980302 6 log.go:172] (0xc0009fbd90) (0xc0000ffe00) Create stream
I0501 17:07:45.980327 6 log.go:172] (0xc0009fbd90) (0xc0000ffe00) Stream added, broadcasting: 1
I0501 17:07:45.982974 6 log.go:172] (0xc0009fbd90) Reply frame received for 1
I0501 17:07:45.983019 6 log.go:172]
(0xc0009fbd90) (0xc001c83180) Create stream
I0501 17:07:45.983035 6 log.go:172] (0xc0009fbd90) (0xc001c83180) Stream added, broadcasting: 3
I0501 17:07:45.983800 6 log.go:172] (0xc0009fbd90) Reply frame received for 3
I0501 17:07:45.983831 6 log.go:172] (0xc0009fbd90) (0xc0017c0640) Create stream
I0501 17:07:45.983843 6 log.go:172] (0xc0009fbd90) (0xc0017c0640) Stream added, broadcasting: 5
I0501 17:07:45.984636 6 log.go:172] (0xc0009fbd90) Reply frame received for 5
I0501 17:07:46.041486 6 log.go:172] (0xc0009fbd90) Data frame received for 5
I0501 17:07:46.041536 6 log.go:172] (0xc0017c0640) (5) Data frame handling
I0501 17:07:46.041565 6 log.go:172] (0xc0009fbd90) Data frame received for 3
I0501 17:07:46.041594 6 log.go:172] (0xc001c83180) (3) Data frame handling
I0501 17:07:46.041624 6 log.go:172] (0xc001c83180) (3) Data frame sent
I0501 17:07:46.041658 6 log.go:172] (0xc0009fbd90) Data frame received for 3
I0501 17:07:46.041674 6 log.go:172] (0xc001c83180) (3) Data frame handling
I0501 17:07:46.042919 6 log.go:172] (0xc0009fbd90) Data frame received for 1
I0501 17:07:46.042959 6 log.go:172] (0xc0000ffe00) (1) Data frame handling
I0501 17:07:46.042994 6 log.go:172] (0xc0000ffe00) (1) Data frame sent
I0501 17:07:46.043017 6 log.go:172] (0xc0009fbd90) (0xc0000ffe00) Stream removed, broadcasting: 1
I0501 17:07:46.043040 6 log.go:172] (0xc0009fbd90) Go away received
I0501 17:07:46.043175 6 log.go:172] (0xc0009fbd90) (0xc0000ffe00) Stream removed, broadcasting: 1
I0501 17:07:46.043202 6 log.go:172] (0xc0009fbd90) (0xc001c83180) Stream removed, broadcasting: 3
I0501 17:07:46.043222 6 log.go:172] (0xc0009fbd90) (0xc0017c0640) Stream removed, broadcasting: 5
May 1 17:07:46.043: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:07:46.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-658" for this suite.
May 1 17:08:26.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:08:26.142: INFO: namespace e2e-kubelet-etc-hosts-658 deletion completed in 40.094697216s

• [SLOW TEST:51.248 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:08:26.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7116
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 1 17:08:26.219: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 1 17:08:52.340: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.7:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7116 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:08:52.340: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:08:52.374521 6 log.go:172] (0xc002a136b0) (0xc000ecff40) Create stream
I0501 17:08:52.374554 6 log.go:172] (0xc002a136b0) (0xc000ecff40) Stream added, broadcasting: 1
I0501 17:08:52.376835 6 log.go:172] (0xc002a136b0) Reply frame received for 1
I0501 17:08:52.376871 6 log.go:172] (0xc002a136b0) (0xc00309fae0) Create stream
I0501 17:08:52.376881 6 log.go:172] (0xc002a136b0) (0xc00309fae0) Stream added, broadcasting: 3
I0501 17:08:52.379027 6 log.go:172] (0xc002a136b0) Reply frame received for 3
I0501 17:08:52.379074 6 log.go:172] (0xc002a136b0) (0xc001d64000) Create stream
I0501 17:08:52.379088 6 log.go:172] (0xc002a136b0) (0xc001d64000) Stream added, broadcasting: 5
I0501 17:08:52.380097 6 log.go:172] (0xc002a136b0) Reply frame received for 5
I0501 17:08:52.486655 6 log.go:172] (0xc002a136b0) Data frame received for 3
I0501 17:08:52.486689 6 log.go:172] (0xc00309fae0) (3) Data frame handling
I0501 17:08:52.486702 6 log.go:172] (0xc00309fae0) (3) Data frame sent
I0501 17:08:52.486980 6 log.go:172] (0xc002a136b0) Data frame received for 3
I0501 17:08:52.486995 6 log.go:172] (0xc00309fae0) (3) Data frame handling
I0501 17:08:52.487027 6 log.go:172] (0xc002a136b0) Data frame received for 5
I0501 17:08:52.487037 6 log.go:172] (0xc001d64000) (5) Data frame handling
I0501 17:08:52.487948 6 log.go:172] (0xc002a136b0) Data frame received for 1
I0501 17:08:52.487973 6 log.go:172] (0xc000ecff40) (1) Data frame handling
I0501 17:08:52.488000 6 log.go:172] (0xc000ecff40) (1) Data frame sent
I0501 17:08:52.488013 6 log.go:172] (0xc002a136b0) (0xc000ecff40) Stream removed, broadcasting: 1
I0501 17:08:52.488090 6 log.go:172] (0xc002a136b0) (0xc000ecff40) Stream removed, broadcasting: 1
I0501 17:08:52.488103 6 log.go:172] (0xc002a136b0) (0xc00309fae0) Stream removed, broadcasting: 3
I0501 17:08:52.488112 6 log.go:172] (0xc002a136b0) (0xc001d64000) Stream removed, broadcasting: 5
May 1 17:08:52.488: INFO: Found all expected endpoints: [netserver-0]
I0501 17:08:52.488395 6 log.go:172] (0xc002a136b0) Go away received
May 1 17:08:52.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.167:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7116 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 17:08:52.491: INFO: >>> kubeConfig: /root/.kube/config
I0501 17:08:52.566186 6 log.go:172] (0xc00023c6e0) (0xc0027fc0a0) Create stream
I0501 17:08:52.566222 6 log.go:172] (0xc00023c6e0) (0xc0027fc0a0) Stream added, broadcasting: 1
I0501 17:08:52.568234 6 log.go:172] (0xc00023c6e0) Reply frame received for 1
I0501 17:08:52.568280 6 log.go:172] (0xc00023c6e0) (0xc0013ac000) Create stream
I0501 17:08:52.568295 6 log.go:172] (0xc00023c6e0) (0xc0013ac000) Stream added, broadcasting: 3
I0501 17:08:52.568970 6 log.go:172] (0xc00023c6e0) Reply frame received for 3
I0501 17:08:52.569009 6 log.go:172] (0xc00023c6e0) (0xc0013ac140) Create stream
I0501 17:08:52.569022 6 log.go:172] (0xc00023c6e0) (0xc0013ac140) Stream added, broadcasting: 5
I0501 17:08:52.569783 6 log.go:172] (0xc00023c6e0) Reply frame received for 5
I0501 17:08:52.625854 6 log.go:172] (0xc00023c6e0) Data frame received for 5
I0501 17:08:52.625894 6 log.go:172] (0xc0013ac140) (5) Data frame handling
I0501 17:08:52.625912 6 log.go:172] (0xc00023c6e0) Data frame received for 3
I0501 17:08:52.625919 6 log.go:172] (0xc0013ac000) (3) Data frame handling
I0501 17:08:52.625925 6 log.go:172] (0xc0013ac000) (3) Data frame sent
I0501 17:08:52.626463 6 log.go:172] (0xc00023c6e0) Data frame received for 3
I0501 17:08:52.626486 6 log.go:172] (0xc0013ac000) (3) Data frame handling
I0501 17:08:52.627336 6 log.go:172] (0xc00023c6e0) Data frame received for 1
I0501 17:08:52.627347 6 log.go:172] (0xc0027fc0a0) (1) Data frame handling
I0501 17:08:52.627353 6 log.go:172] (0xc0027fc0a0) (1) Data frame sent
I0501 17:08:52.627362 6 log.go:172] (0xc00023c6e0) (0xc0027fc0a0) Stream removed, broadcasting: 1
I0501 17:08:52.627379 6 log.go:172] (0xc00023c6e0) Go away received
I0501 17:08:52.627501 6 log.go:172] (0xc00023c6e0) (0xc0027fc0a0) Stream removed, broadcasting: 1
I0501 17:08:52.627526 6 log.go:172] (0xc00023c6e0) (0xc0013ac000) Stream removed, broadcasting: 3
I0501 17:08:52.627543 6 log.go:172] (0xc00023c6e0) (0xc0013ac140) Stream removed, broadcasting: 5
May 1 17:08:52.627: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:08:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7116" for this suite.
May 1 17:09:16.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:09:16.786: INFO: namespace pod-network-test-7116 deletion completed in 24.114807114s

• [SLOW TEST:50.643 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:09:16.786: INFO: >>> kubeConfig: /root/.kube/config
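[Editor's note] The "Found all expected endpoints: [netserver-0]" / "[netserver-1]" lines above come from the framework curl-ing each netserver pod's /hostName endpoint (via the ExecWithOptions calls logged here) and collecting the reported hostnames until every expected pod has answered. A minimal sketch of that polling pattern, with hypothetical helper names rather than the framework's actual Go code:

```python
def collect_endpoints(fetch_hostname, expected, max_tries):
    """Poll until every expected hostname has been observed, mirroring the
    e2e 'Found all expected endpoints' check. fetch_hostname is a stand-in
    for the curl-through-exec call shown in the log above; empty replies
    (filtered by the grep in the logged command) are ignored."""
    seen = set()
    for _ in range(max_tries):
        name = fetch_hostname()
        if name:
            seen.add(name)
        if seen >= set(expected):
            return sorted(seen)
    raise TimeoutError(f"missing endpoints: {set(expected) - seen}")

# Simulated replies standing in for the two netserver pods:
replies = iter(["netserver-0", "", "netserver-1"])
print(collect_endpoints(lambda: next(replies), ["netserver-0", "netserver-1"], 10))
# → ['netserver-0', 'netserver-1']
```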
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-51f290df-6579-416d-b336-8149eeb50fe6
STEP: Creating a pod to test consume configMaps
May 1 17:09:16.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717" in namespace "configmap-993" to be "success or failure"
May 1 17:09:16.913: INFO: Pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717": Phase="Pending", Reason="", readiness=false. Elapsed: 3.360351ms
May 1 17:09:18.917: INFO: Pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007223382s
May 1 17:09:20.922: INFO: Pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717": Phase="Running", Reason="", readiness=true. Elapsed: 4.011844926s
May 1 17:09:22.926: INFO: Pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016218414s
STEP: Saw pod success
May 1 17:09:22.926: INFO: Pod "pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717" satisfied condition "success or failure"
May 1 17:09:22.929: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717 container configmap-volume-test:
STEP: delete the pod
May 1 17:09:22.951: INFO: Waiting for pod pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717 to disappear
May 1 17:09:23.018: INFO: Pod pod-configmaps-6dab924d-12b8-4374-a67f-24cc3d768717 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:09:23.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-993" for this suite.
May 1 17:09:29.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:09:29.147: INFO: namespace configmap-993 deletion completed in 6.125484492s

• [SLOW TEST:12.361 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:09:29.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 17:09:29.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1121'
May 1 17:09:33.340: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 17:09:33.341: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
May 1 17:09:33.357: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
May 1 17:09:33.362: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 1 17:09:33.398: INFO: scanned /root for discovery docs:
May 1 17:09:33.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1121'
May 1 17:09:50.447: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 1 17:09:50.447: INFO: stdout: "Created e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc\nScaling up e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
May 1 17:09:50.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1121'
May 1 17:09:50.551: INFO: stderr: ""
May 1 17:09:50.551: INFO: stdout: "e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc-klpbq e2e-test-nginx-rc-jkszl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
May 1 17:09:55.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1121'
May 1 17:09:55.654: INFO: stderr: ""
May 1 17:09:55.654: INFO: stdout: "e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc-klpbq "
May 1 17:09:55.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc-klpbq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1121'
May 1 17:09:55.741: INFO: stderr: ""
May 1 17:09:55.741: INFO: stdout: "true"
May 1 17:09:55.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc-klpbq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1121'
May 1 17:09:55.842: INFO: stderr: ""
May 1 17:09:55.842: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
May 1 17:09:55.842: INFO: e2e-test-nginx-rc-71dd8d73e2de2d584b44f724e99138bc-klpbq is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
May 1 17:09:55.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1121'
May 1 17:09:55.948: INFO: stderr: ""
May 1 17:09:55.948: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:09:55.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1121" for this suite.
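[Editor's note] The rolling-update output above ("keep 1 pods available, don't exceed 2 pods") describes a one-pod-at-a-time surge: the replacement controller is scaled up by one before the old controller is scaled down by one. A toy simulation of that schedule, illustrative only and not kubectl's actual implementation:

```python
def rolling_update_steps(old, new_total):
    """Emit (old_replicas, new_replicas) after each update step of a
    one-at-a-time rolling update: surge one new pod up, then retire one
    old pod, so availability never drops and total never exceeds old + 1
    (the 'don't exceed 2 pods' bound for a 1-replica controller)."""
    steps = [(old, 0)]
    new = 0
    while new < new_total or old > 0:
        if new < new_total:
            new += 1   # bring one replacement pod up first
        if old > 0:
            old -= 1   # then scale the old controller down by one
        steps.append((old, new))
    return steps

print(rolling_update_steps(1, 1))
# → [(1, 0), (0, 1)]
```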
May 1 17:10:17.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:10:18.038: INFO: namespace kubectl-1121 deletion completed in 22.087294396s

• [SLOW TEST:48.891 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:10:18.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 1 17:10:18.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:18.609: INFO: Number of nodes with available pods: 0
May 1 17:10:18.609: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:10:19.613: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:19.617: INFO: Number of nodes with available pods: 0
May 1 17:10:19.617: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:10:21.489: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:21.556: INFO: Number of nodes with available pods: 0
May 1 17:10:21.556: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:10:21.671: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:21.717: INFO: Number of nodes with available pods: 0
May 1 17:10:21.717: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:10:22.615: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:22.618: INFO: Number of nodes with available pods: 0
May 1 17:10:22.618: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:10:23.618: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:23.622: INFO: Number of nodes with available pods: 1
May 1 17:10:23.622: INFO: Node iruya-worker2 is running more than one daemon pod
May 1 17:10:24.614: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:24.617: INFO: Number of nodes with available pods: 2
May 1 17:10:24.617: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 1 17:10:24.640: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:10:24.645: INFO: Number of nodes with available pods: 2
May 1 17:10:24.645: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2756, will wait for the garbage collector to delete the pods
May 1 17:10:26.074: INFO: Deleting DaemonSet.extensions daemon-set took: 164.667609ms
May 1 17:10:26.374: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.20667ms
May 1 17:10:30.383: INFO: Number of nodes with available pods: 0
May 1 17:10:30.383: INFO: Number of running nodes: 0, number of available pods: 0
May 1 17:10:30.386: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2756/daemonsets","resourceVersion":"8478130"},"items":null}
May 1 17:10:30.388: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2756/pods","resourceVersion":"8478130"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:10:30.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2756" for this suite.
May 1 17:10:36.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:10:36.500: INFO: namespace daemonsets-2756 deletion completed in 6.099121172s

• [SLOW TEST:18.460 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:10:36.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 1 17:10:36.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3" in namespace "downward-api-9714" to be "success or failure"
May 1 17:10:36.597: INFO: Pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.873571ms
May 1 17:10:38.602: INFO: Pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036794219s
May 1 17:10:40.606: INFO: Pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3": Phase="Running", Reason="", readiness=true. Elapsed: 4.040927161s
May 1 17:10:42.611: INFO: Pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045769894s
STEP: Saw pod success
May 1 17:10:42.611: INFO: Pod "downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3" satisfied condition "success or failure"
May 1 17:10:42.614: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3 container client-container:
STEP: delete the pod
May 1 17:10:42.650: INFO: Waiting for pod downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3 to disappear
May 1 17:10:42.652: INFO: Pod downwardapi-volume-3334c3a7-d2ba-49bf-96ea-80a1a36072f3 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:10:42.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9714" for this suite.
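[Editor's note] The Pending → Pending → Running → Succeeded lines with growing Elapsed values above are the framework polling the pod's phase on a fixed interval (roughly every 2s, up to the logged 5m0s budget) until it reaches a terminal phase. A minimal sketch of that wait loop; the helper name is hypothetical and the real framework does this in Go:

```python
def wait_for_terminal_phase(get_phase, max_polls):
    """Poll a pod's phase until it is terminal ('Succeeded' or 'Failed'),
    returning the final phase and the number of polls taken, mirroring
    the 'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop in the log above. get_phase stands in for an API GET."""
    for polls in range(1, max_polls + 1):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, polls
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the logged transitions:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), 150))
# → ('Succeeded', 4)
```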
May 1 17:10:48.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 17:10:48.768: INFO: namespace downward-api-9714 deletion completed in 6.095620702s • [SLOW TEST:12.268 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 17:10:48.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 1 17:10:48.868: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 1 17:10:50.944: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:10:51.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5965" for this suite.
May 1 17:11:00.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:11:00.365: INFO: namespace replication-controller-5965 deletion completed in 8.385684425s
• [SLOW TEST:11.597 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:11:00.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7644, will wait for the garbage collector to delete the pods
May 1 17:11:06.497: INFO: Deleting Job.batch foo took: 7.934804ms
May 1 17:11:06.597: INFO: Terminating Job.batch foo pods took: 100.284265ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:11:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7644" for this suite.
May 1 17:11:58.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:11:58.434: INFO: namespace job-7644 deletion completed in 6.128672159s
• [SLOW TEST:58.069 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:11:58.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 1 17:11:58.621: INFO: Create a RollingUpdate DaemonSet
May 1 17:11:58.625: INFO: Check that daemon pods launch on every node of the cluster
May 1 17:11:58.674: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:11:58.677: INFO: Number of nodes with available pods: 0
May 1 17:11:58.677: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:11:59.682: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:11:59.686: INFO: Number of nodes with available pods: 0
May 1 17:11:59.686: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:12:00.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:12:00.864: INFO: Number of nodes with available pods: 0
May 1 17:12:00.864: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:12:01.722: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:12:01.724: INFO: Number of nodes with available pods: 0
May 1 17:12:01.724: INFO: Node iruya-worker is running more than one daemon pod
May 1 17:12:02.682: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:12:02.686: INFO: Number of nodes with available pods: 2
May 1 17:12:02.686: INFO: Number of running nodes: 2, number of available pods: 2
May 1 17:12:02.686: INFO: Update the DaemonSet to trigger a rollout
May 1 17:12:02.693: INFO: Updating DaemonSet daemon-set
May 1 17:12:12.715: INFO: Roll back the DaemonSet before rollout is complete
May 1 17:12:12.720: INFO: Updating DaemonSet daemon-set
May 1 17:12:12.720: INFO: Make sure DaemonSet rollback is complete
May 1 17:12:12.731: INFO: Wrong image for pod: daemon-set-8m45s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 1 17:12:12.731: INFO: Pod daemon-set-8m45s is not available
May 1 17:12:12.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:12:13.743: INFO: Wrong image for pod: daemon-set-8m45s. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
May 1 17:12:13.743: INFO: Pod daemon-set-8m45s is not available
May 1 17:12:13.747: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 17:12:14.743: INFO: Pod daemon-set-qp5pt is not available
May 1 17:12:14.748: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6696, will wait for the garbage collector to delete the pods
May 1 17:12:14.813: INFO: Deleting DaemonSet.extensions daemon-set took: 5.787876ms
May 1 17:12:15.113: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298281ms
May 1 17:12:22.266: INFO: Number of nodes with available pods: 0
May 1 17:12:22.266: INFO: Number of running nodes: 0, number of available pods: 0
May 1 17:12:22.268: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6696/daemonsets","resourceVersion":"8478578"},"items":null}
May 1 17:12:22.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6696/pods","resourceVersion":"8478578"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:12:22.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6696" for this suite.
May 1 17:12:28.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:12:28.396: INFO: namespace daemonsets-6696 deletion completed in 6.114774087s
• [SLOW TEST:29.961 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:12:28.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2083/secret-test-b009d3d1-ccec-413c-96c9-1be64a91805b
STEP: Creating a pod to test consume secrets
May 1 17:12:28.494: INFO: Waiting up to 5m0s for pod "pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83" in namespace "secrets-2083" to be "success or failure"
May 1 17:12:28.498: INFO: Pod "pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232316ms
May 1 17:12:30.502: INFO: Pod "pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008037515s
May 1 17:12:32.506: INFO: Pod "pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011716538s
STEP: Saw pod success
May 1 17:12:32.506: INFO: Pod "pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83" satisfied condition "success or failure"
May 1 17:12:32.508: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83 container env-test:
STEP: delete the pod
May 1 17:12:32.530: INFO: Waiting for pod pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83 to disappear
May 1 17:12:32.534: INFO: Pod pod-configmaps-3fe093c6-c8b1-4ddc-8892-440f5324fa83 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:12:32.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2083" for this suite.
May 1 17:12:38.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:12:38.627: INFO: namespace secrets-2083 deletion completed in 6.089635554s
• [SLOW TEST:10.231 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:12:38.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 1 17:12:46.842: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 17:12:46.858: INFO: Pod pod-with-prestop-http-hook still exists
May 1 17:12:48.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 17:12:48.862: INFO: Pod pod-with-prestop-http-hook still exists
May 1 17:12:50.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 17:12:50.862: INFO: Pod pod-with-prestop-http-hook still exists
May 1 17:12:52.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 17:12:52.863: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:12:52.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9221" for this suite.
May 1 17:13:14.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:13:15.014: INFO: namespace container-lifecycle-hook-9221 deletion completed in 22.140431317s
• [SLOW TEST:36.387 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:13:15.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-595ecbe8-0c2f-4453-b651-6c0e76d29d76
STEP: Creating a pod to test consume configMaps
May 1 17:13:15.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25" in namespace "configmap-1620" to be "success or failure"
May 1 17:13:15.140: INFO: Pod "pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731397ms
May 1 17:13:17.144: INFO: Pod "pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007879196s
May 1 17:13:19.149: INFO: Pod "pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012576246s
STEP: Saw pod success
May 1 17:13:19.149: INFO: Pod "pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25" satisfied condition "success or failure"
May 1 17:13:19.152: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25 container configmap-volume-test:
STEP: delete the pod
May 1 17:13:19.214: INFO: Waiting for pod pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25 to disappear
May 1 17:13:19.217: INFO: Pod pod-configmaps-674fa3ac-b3c8-4c48-85ac-8fd2f3d42d25 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:13:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1620" for this suite.
May 1 17:13:25.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:13:25.342: INFO: namespace configmap-1620 deletion completed in 6.122305836s
• [SLOW TEST:10.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:13:25.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 1 17:13:25.509: INFO: Waiting up to 5m0s for pod "downward-api-239918fd-d606-446c-a29d-9f32ccca2eca" in namespace "downward-api-4460" to be "success or failure"
May 1 17:13:25.517: INFO: Pod "downward-api-239918fd-d606-446c-a29d-9f32ccca2eca": Phase="Pending", Reason="", readiness=false. Elapsed: 7.950555ms
May 1 17:13:27.522: INFO: Pod "downward-api-239918fd-d606-446c-a29d-9f32ccca2eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012178178s
May 1 17:13:29.525: INFO: Pod "downward-api-239918fd-d606-446c-a29d-9f32ccca2eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016142357s
STEP: Saw pod success
May 1 17:13:29.526: INFO: Pod "downward-api-239918fd-d606-446c-a29d-9f32ccca2eca" satisfied condition "success or failure"
May 1 17:13:29.528: INFO: Trying to get logs from node iruya-worker2 pod downward-api-239918fd-d606-446c-a29d-9f32ccca2eca container dapi-container:
STEP: delete the pod
May 1 17:13:29.549: INFO: Waiting for pod downward-api-239918fd-d606-446c-a29d-9f32ccca2eca to disappear
May 1 17:13:29.553: INFO: Pod downward-api-239918fd-d606-446c-a29d-9f32ccca2eca no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:13:29.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4460" for this suite.
May 1 17:13:35.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:13:35.639: INFO: namespace downward-api-4460 deletion completed in 6.083325228s
• [SLOW TEST:10.297 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:13:35.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 1 17:13:35.742: INFO: Waiting up to 5m0s for pod "pod-2f2d7471-c58f-4479-a616-058f00a73d39" in namespace "emptydir-1031" to be "success or failure"
May 1 17:13:35.751: INFO: Pod "pod-2f2d7471-c58f-4479-a616-058f00a73d39": Phase="Pending", Reason="", readiness=false. Elapsed: 9.247611ms
May 1 17:13:37.756: INFO: Pod "pod-2f2d7471-c58f-4479-a616-058f00a73d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013655433s
May 1 17:13:39.759: INFO: Pod "pod-2f2d7471-c58f-4479-a616-058f00a73d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017111776s
STEP: Saw pod success
May 1 17:13:39.759: INFO: Pod "pod-2f2d7471-c58f-4479-a616-058f00a73d39" satisfied condition "success or failure"
May 1 17:13:39.762: INFO: Trying to get logs from node iruya-worker2 pod pod-2f2d7471-c58f-4479-a616-058f00a73d39 container test-container:
STEP: delete the pod
May 1 17:13:39.798: INFO: Waiting for pod pod-2f2d7471-c58f-4479-a616-058f00a73d39 to disappear
May 1 17:13:39.830: INFO: Pod pod-2f2d7471-c58f-4479-a616-058f00a73d39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:13:39.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1031" for this suite.
May 1 17:13:45.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:13:45.948: INFO: namespace emptydir-1031 deletion completed in 6.114312372s
• [SLOW TEST:10.309 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:13:45.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1933
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1933
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1933
May 1 17:13:46.118: INFO: Found 0 stateful pods, waiting for 1
May 1 17:13:56.467: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 1 17:13:56.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 1 17:13:56.865: INFO: stderr: "I0501 17:13:56.698133 2974 log.go:172] (0xc000978420) (0xc00036e820) Create stream\nI0501 17:13:56.698196 2974 log.go:172] (0xc000978420) (0xc00036e820) Stream added, broadcasting: 1\nI0501 17:13:56.700646 2974 log.go:172] (0xc000978420) Reply frame received for 1\nI0501 17:13:56.700700 2974 log.go:172] (0xc000978420) (0xc00067c140) Create stream\nI0501 17:13:56.700722 2974 log.go:172] (0xc000978420) (0xc00067c140) Stream added, broadcasting: 3\nI0501 17:13:56.701734 2974 log.go:172] (0xc000978420) Reply frame received for 3\nI0501 17:13:56.701760 2974 log.go:172] (0xc000978420) (0xc00036e000) Create stream\nI0501 17:13:56.701767 2974 log.go:172] (0xc000978420) (0xc00036e000) Stream added, broadcasting: 5\nI0501 17:13:56.702502 2974 log.go:172] (0xc000978420) Reply frame received for 5\nI0501 17:13:56.781580 2974 log.go:172] (0xc000978420) Data frame received for 5\nI0501 17:13:56.781609 2974 log.go:172] (0xc00036e000) (5) Data frame handling\nI0501 17:13:56.781627 2974 log.go:172] (0xc00036e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 17:13:56.857709 2974 log.go:172] (0xc000978420) Data frame received for 3\nI0501 17:13:56.857746 2974 log.go:172] (0xc00067c140) (3) Data frame handling\nI0501 17:13:56.857760 2974 log.go:172] (0xc00067c140) (3) Data frame sent\nI0501 17:13:56.857772 2974 log.go:172] (0xc000978420) Data frame received for 3\nI0501 17:13:56.857786 2974 log.go:172] (0xc00067c140) (3) Data frame handling\nI0501 17:13:56.857817 2974 log.go:172] (0xc000978420) Data frame received for 5\nI0501 17:13:56.857902 2974 log.go:172] (0xc00036e000) (5) Data frame handling\nI0501 17:13:56.859572 2974 log.go:172] (0xc000978420) Data frame received for 1\nI0501 17:13:56.859603 2974 log.go:172] (0xc00036e820) (1) Data frame handling\nI0501 17:13:56.859629 2974 log.go:172] (0xc00036e820) (1) Data frame sent\nI0501 17:13:56.859649 2974 log.go:172] (0xc000978420) (0xc00036e820) Stream removed, broadcasting: 1\nI0501 17:13:56.859667 2974 log.go:172] (0xc000978420) Go away received\nI0501 17:13:56.860129 2974 log.go:172] (0xc000978420) (0xc00036e820) Stream removed, broadcasting: 1\nI0501 17:13:56.860154 2974 log.go:172] (0xc000978420) (0xc00067c140) Stream removed, broadcasting: 3\nI0501 17:13:56.860166 2974 log.go:172] (0xc000978420) (0xc00036e000) Stream removed, broadcasting: 5\n"
May 1 17:13:56.865: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 1 17:13:56.865: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 1 17:13:56.870: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 1 17:14:06.875: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 1 17:14:06.875: INFO: Waiting for statefulset status.replicas updated to 0
May 1 17:14:06.889: INFO: POD NODE PHASE GRACE CONDITIONS
May 1 17:14:06.889: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }]
May 1 17:14:06.889: INFO:
May 1 17:14:06.889: INFO: StatefulSet ss has not reached scale 3, at 1
May 1 17:14:07.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993757336s
May 1 17:14:08.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974844056s
May 1 17:14:10.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969319109s
May 1 17:14:11.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.746003218s
May 1 17:14:12.502: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.512192473s
May 1 17:14:13.712: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.381394209s
May 1 17:14:14.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.171182206s
May 1 17:14:15.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.166330536s
May 1 17:14:16.765: INFO: Verifying statefulset ss doesn't scale past 3 for another 123.338668ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1933
May 1 17:14:17.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 17:14:17.971: INFO: stderr: "I0501 17:14:17.906650 2994 log.go:172] (0xc0009e2420) (0xc0009c86e0) Create stream\nI0501 17:14:17.906703 2994 log.go:172] (0xc0009e2420) (0xc0009c86e0) Stream added, broadcasting: 1\nI0501 17:14:17.908896 2994 log.go:172] (0xc0009e2420) Reply frame received for 1\nI0501 17:14:17.908941 2994 log.go:172] (0xc0009e2420) (0xc0006b00a0) Create stream\nI0501 17:14:17.908956 2994 log.go:172] (0xc0009e2420) (0xc0006b00a0) Stream added, broadcasting: 3\nI0501 17:14:17.910122 2994 log.go:172] (0xc0009e2420) Reply frame received for 3\nI0501 17:14:17.910154 2994 log.go:172] (0xc0009e2420) (0xc0009c8780) Create stream\nI0501 17:14:17.910161 2994 log.go:172] (0xc0009e2420) (0xc0009c8780) Stream added, broadcasting: 5\nI0501 17:14:17.911057 2994 log.go:172] (0xc0009e2420) Reply frame received for 5\nI0501 17:14:17.965019 2994 log.go:172] (0xc0009e2420) Data frame received for 5\nI0501 17:14:17.965081 2994 log.go:172] (0xc0009c8780) (5) Data frame handling\nI0501 17:14:17.965325 2994 log.go:172] (0xc0009c8780) (5) Data frame sent\nI0501 17:14:17.965363 2994 log.go:172] (0xc0009e2420) Data frame received for 5\nI0501 17:14:17.965380 2994 log.go:172] (0xc0009c8780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0501 17:14:17.965455 2994 log.go:172] (0xc0009e2420) Data frame received for 3\nI0501 17:14:17.965485 2994 log.go:172] (0xc0006b00a0) (3) Data frame handling\nI0501 17:14:17.965500 2994 log.go:172] (0xc0006b00a0) (3) Data frame sent\nI0501 17:14:17.965512 2994 log.go:172] (0xc0009e2420) Data frame received for 3\nI0501 17:14:17.965521 2994 log.go:172] (0xc0006b00a0) (3) Data frame handling\nI0501 17:14:17.966306 2994 log.go:172] (0xc0009e2420) Data frame received for 1\nI0501 17:14:17.966375 2994 log.go:172] (0xc0009c86e0) (1) Data frame handling\nI0501 17:14:17.966399 2994 log.go:172] (0xc0009c86e0) (1) Data frame sent\nI0501 17:14:17.966417 2994 log.go:172] (0xc0009e2420) (0xc0009c86e0) Stream removed, broadcasting: 1\nI0501 17:14:17.966446 2994 log.go:172] (0xc0009e2420) Go away received\nI0501 17:14:17.966986 2994 log.go:172] (0xc0009e2420) (0xc0009c86e0) Stream removed, broadcasting: 1\nI0501 17:14:17.967008 2994 log.go:172] (0xc0009e2420) (0xc0006b00a0) Stream removed, broadcasting: 3\nI0501 17:14:17.967029 2994 log.go:172] (0xc0009e2420) (0xc0009c8780) Stream removed, broadcasting: 5\n"
May 1 17:14:17.972: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 1 17:14:17.972: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 1 17:14:17.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 17:14:18.179: INFO: stderr: "I0501 17:14:18.112382 3015 log.go:172] (0xc0009f62c0) (0xc0009505a0) Create stream\nI0501 17:14:18.112440 3015 log.go:172] (0xc0009f62c0) (0xc0009505a0) Stream added, broadcasting: 1\nI0501 17:14:18.114433 3015 log.go:172] (0xc0009f62c0) Reply frame received for 1\nI0501 17:14:18.114466 3015 log.go:172] (0xc0009f62c0) (0xc000950640) Create stream\nI0501 17:14:18.114476 3015 log.go:172] (0xc0009f62c0) (0xc000950640) Stream added, broadcasting: 3\nI0501 17:14:18.115285 3015 log.go:172] (0xc0009f62c0) Reply frame received for 3\nI0501 17:14:18.115323 3015 log.go:172] (0xc0009f62c0) (0xc0005d0320) Create stream\nI0501 17:14:18.115342 3015 log.go:172] (0xc0009f62c0) (0xc0005d0320) Stream added, broadcasting: 5\nI0501 17:14:18.116161 3015 log.go:172] (0xc0009f62c0) Reply frame received for 5\nI0501 17:14:18.171869 3015 log.go:172] (0xc0009f62c0) Data frame received for 3\nI0501 17:14:18.171930 3015 log.go:172] (0xc000950640) (3) Data frame handling\nI0501 17:14:18.171953 3015 log.go:172] (0xc000950640) (3) Data frame sent\nI0501 17:14:18.171992 3015 log.go:172] (0xc0009f62c0) Data frame received for 5\nI0501 17:14:18.172010 3015 log.go:172] (0xc0005d0320) (5) Data frame handling\nI0501 17:14:18.172041 3015 log.go:172] (0xc0005d0320) (5) Data frame sent\nI0501 17:14:18.172062 3015 log.go:172] (0xc0009f62c0) Data frame received for 5\nI0501 17:14:18.172077 3015 log.go:172] (0xc0005d0320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 17:14:18.172422 3015 log.go:172] (0xc0009f62c0) Data frame received for 3\nI0501 17:14:18.172450 3015 log.go:172] (0xc000950640) (3) Data frame handling\nI0501 17:14:18.175105 3015 log.go:172] (0xc0009f62c0) Data frame received for 1\nI0501 17:14:18.175135 3015 log.go:172] (0xc0009505a0) (1) Data frame handling\nI0501 17:14:18.175166 3015 log.go:172] (0xc0009505a0) (1) Data frame sent\nI0501 17:14:18.175184 3015 log.go:172] (0xc0009f62c0) (0xc0009505a0) Stream removed, broadcasting: 1\nI0501 17:14:18.175209 3015 log.go:172] (0xc0009f62c0) Go away received\nI0501 17:14:18.175600 3015 log.go:172] (0xc0009f62c0) (0xc0009505a0) Stream removed, broadcasting: 1\nI0501 17:14:18.175619 3015 log.go:172] (0xc0009f62c0) (0xc000950640) Stream removed, broadcasting: 3\nI0501 17:14:18.175628 3015 log.go:172] (0xc0009f62c0) (0xc0005d0320) Stream removed, broadcasting: 5\n"
May 1 17:14:18.179: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 1 17:14:18.179: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 1 17:14:18.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 17:14:18.392: INFO: stderr: "I0501 17:14:18.316894 3036 log.go:172] (0xc0009bc420) (0xc000a0a6e0) Create stream\nI0501 17:14:18.316954 3036 log.go:172] (0xc0009bc420) (0xc000a0a6e0) Stream added, broadcasting: 1\nI0501 17:14:18.319658 3036 log.go:172] (0xc0009bc420) Reply frame received for 1\nI0501 17:14:18.319715 3036 log.go:172] (0xc0009bc420) (0xc00066c1e0) Create stream\nI0501 17:14:18.319730 3036 log.go:172] (0xc0009bc420) (0xc00066c1e0) Stream added, broadcasting: 3\nI0501 17:14:18.321005 3036 log.go:172] (0xc0009bc420) Reply frame received for 3\nI0501 17:14:18.321048 3036 log.go:172] (0xc0009bc420) (0xc000691a40) Create stream\nI0501 17:14:18.321074 3036 log.go:172] (0xc0009bc420) (0xc000691a40) Stream added, broadcasting: 5\nI0501 17:14:18.322347 3036 log.go:172] (0xc0009bc420) Reply frame received for 5\nI0501 17:14:18.385067 3036 log.go:172] (0xc0009bc420) Data frame received for 5\nI0501 17:14:18.385097 3036 log.go:172] (0xc000691a40) (5) Data frame handling\nI0501 17:14:18.385105 3036 log.go:172] (0xc000691a40) (5) Data
frame sent\nI0501 17:14:18.385227 3036 log.go:172] (0xc0009bc420) Data frame received for 5\nI0501 17:14:18.385237 3036 log.go:172] (0xc000691a40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 17:14:18.385255 3036 log.go:172] (0xc0009bc420) Data frame received for 3\nI0501 17:14:18.385259 3036 log.go:172] (0xc00066c1e0) (3) Data frame handling\nI0501 17:14:18.385265 3036 log.go:172] (0xc00066c1e0) (3) Data frame sent\nI0501 17:14:18.385270 3036 log.go:172] (0xc0009bc420) Data frame received for 3\nI0501 17:14:18.385273 3036 log.go:172] (0xc00066c1e0) (3) Data frame handling\nI0501 17:14:18.387286 3036 log.go:172] (0xc0009bc420) Data frame received for 1\nI0501 17:14:18.387310 3036 log.go:172] (0xc000a0a6e0) (1) Data frame handling\nI0501 17:14:18.387322 3036 log.go:172] (0xc000a0a6e0) (1) Data frame sent\nI0501 17:14:18.387338 3036 log.go:172] (0xc0009bc420) (0xc000a0a6e0) Stream removed, broadcasting: 1\nI0501 17:14:18.387453 3036 log.go:172] (0xc0009bc420) Go away received\nI0501 17:14:18.387599 3036 log.go:172] (0xc0009bc420) (0xc000a0a6e0) Stream removed, broadcasting: 1\nI0501 17:14:18.387611 3036 log.go:172] (0xc0009bc420) (0xc00066c1e0) Stream removed, broadcasting: 3\nI0501 17:14:18.387616 3036 log.go:172] (0xc0009bc420) (0xc000691a40) Stream removed, broadcasting: 5\n" May 1 17:14:18.392: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 17:14:18.392: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 17:14:18.478: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 17:14:18.478: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 17:14:18.478: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will 
not halt with unhealthy stateful pod May 1 17:14:18.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 17:14:18.720: INFO: stderr: "I0501 17:14:18.639146 3057 log.go:172] (0xc000998370) (0xc0008e86e0) Create stream\nI0501 17:14:18.639204 3057 log.go:172] (0xc000998370) (0xc0008e86e0) Stream added, broadcasting: 1\nI0501 17:14:18.641777 3057 log.go:172] (0xc000998370) Reply frame received for 1\nI0501 17:14:18.641826 3057 log.go:172] (0xc000998370) (0xc0006b01e0) Create stream\nI0501 17:14:18.641840 3057 log.go:172] (0xc000998370) (0xc0006b01e0) Stream added, broadcasting: 3\nI0501 17:14:18.642769 3057 log.go:172] (0xc000998370) Reply frame received for 3\nI0501 17:14:18.642812 3057 log.go:172] (0xc000998370) (0xc0008e8780) Create stream\nI0501 17:14:18.642826 3057 log.go:172] (0xc000998370) (0xc0008e8780) Stream added, broadcasting: 5\nI0501 17:14:18.643644 3057 log.go:172] (0xc000998370) Reply frame received for 5\nI0501 17:14:18.713738 3057 log.go:172] (0xc000998370) Data frame received for 5\nI0501 17:14:18.713776 3057 log.go:172] (0xc0008e8780) (5) Data frame handling\nI0501 17:14:18.713789 3057 log.go:172] (0xc0008e8780) (5) Data frame sent\nI0501 17:14:18.713798 3057 log.go:172] (0xc000998370) Data frame received for 5\nI0501 17:14:18.713806 3057 log.go:172] (0xc0008e8780) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 17:14:18.713832 3057 log.go:172] (0xc000998370) Data frame received for 3\nI0501 17:14:18.713843 3057 log.go:172] (0xc0006b01e0) (3) Data frame handling\nI0501 17:14:18.713859 3057 log.go:172] (0xc0006b01e0) (3) Data frame sent\nI0501 17:14:18.713884 3057 log.go:172] (0xc000998370) Data frame received for 3\nI0501 17:14:18.713893 3057 log.go:172] (0xc0006b01e0) (3) Data frame handling\nI0501 17:14:18.715320 3057 log.go:172] (0xc000998370) Data frame received for 1\nI0501 
17:14:18.715350 3057 log.go:172] (0xc0008e86e0) (1) Data frame handling\nI0501 17:14:18.715368 3057 log.go:172] (0xc0008e86e0) (1) Data frame sent\nI0501 17:14:18.715381 3057 log.go:172] (0xc000998370) (0xc0008e86e0) Stream removed, broadcasting: 1\nI0501 17:14:18.715395 3057 log.go:172] (0xc000998370) Go away received\nI0501 17:14:18.715789 3057 log.go:172] (0xc000998370) (0xc0008e86e0) Stream removed, broadcasting: 1\nI0501 17:14:18.715811 3057 log.go:172] (0xc000998370) (0xc0006b01e0) Stream removed, broadcasting: 3\nI0501 17:14:18.715820 3057 log.go:172] (0xc000998370) (0xc0008e8780) Stream removed, broadcasting: 5\n" May 1 17:14:18.721: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 17:14:18.721: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 17:14:18.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 17:14:19.222: INFO: stderr: "I0501 17:14:18.845883 3079 log.go:172] (0xc00010efd0) (0xc0006b2b40) Create stream\nI0501 17:14:18.845945 3079 log.go:172] (0xc00010efd0) (0xc0006b2b40) Stream added, broadcasting: 1\nI0501 17:14:18.847794 3079 log.go:172] (0xc00010efd0) Reply frame received for 1\nI0501 17:14:18.847832 3079 log.go:172] (0xc00010efd0) (0xc0006b2be0) Create stream\nI0501 17:14:18.847842 3079 log.go:172] (0xc00010efd0) (0xc0006b2be0) Stream added, broadcasting: 3\nI0501 17:14:18.848537 3079 log.go:172] (0xc00010efd0) Reply frame received for 3\nI0501 17:14:18.848583 3079 log.go:172] (0xc00010efd0) (0xc0006b2c80) Create stream\nI0501 17:14:18.848592 3079 log.go:172] (0xc00010efd0) (0xc0006b2c80) Stream added, broadcasting: 5\nI0501 17:14:18.849445 3079 log.go:172] (0xc00010efd0) Reply frame received for 5\nI0501 17:14:18.896101 3079 log.go:172] (0xc00010efd0) Data frame received for 5\nI0501 
17:14:18.896151 3079 log.go:172] (0xc0006b2c80) (5) Data frame handling\nI0501 17:14:18.896177 3079 log.go:172] (0xc0006b2c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 17:14:19.215598 3079 log.go:172] (0xc00010efd0) Data frame received for 5\nI0501 17:14:19.215658 3079 log.go:172] (0xc0006b2c80) (5) Data frame handling\nI0501 17:14:19.215691 3079 log.go:172] (0xc00010efd0) Data frame received for 3\nI0501 17:14:19.215709 3079 log.go:172] (0xc0006b2be0) (3) Data frame handling\nI0501 17:14:19.215738 3079 log.go:172] (0xc0006b2be0) (3) Data frame sent\nI0501 17:14:19.215756 3079 log.go:172] (0xc00010efd0) Data frame received for 3\nI0501 17:14:19.215770 3079 log.go:172] (0xc0006b2be0) (3) Data frame handling\nI0501 17:14:19.217770 3079 log.go:172] (0xc00010efd0) Data frame received for 1\nI0501 17:14:19.217797 3079 log.go:172] (0xc0006b2b40) (1) Data frame handling\nI0501 17:14:19.217808 3079 log.go:172] (0xc0006b2b40) (1) Data frame sent\nI0501 17:14:19.217821 3079 log.go:172] (0xc00010efd0) (0xc0006b2b40) Stream removed, broadcasting: 1\nI0501 17:14:19.217886 3079 log.go:172] (0xc00010efd0) Go away received\nI0501 17:14:19.218041 3079 log.go:172] (0xc00010efd0) (0xc0006b2b40) Stream removed, broadcasting: 1\nI0501 17:14:19.218053 3079 log.go:172] (0xc00010efd0) (0xc0006b2be0) Stream removed, broadcasting: 3\nI0501 17:14:19.218060 3079 log.go:172] (0xc00010efd0) (0xc0006b2c80) Stream removed, broadcasting: 5\n" May 1 17:14:19.222: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 17:14:19.222: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 17:14:19.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 17:14:19.731: INFO: stderr: "I0501 17:14:19.408901 3099 log.go:172] 
(0xc000116840) (0xc0005fe820) Create stream\nI0501 17:14:19.408954 3099 log.go:172] (0xc000116840) (0xc0005fe820) Stream added, broadcasting: 1\nI0501 17:14:19.413464 3099 log.go:172] (0xc000116840) Reply frame received for 1\nI0501 17:14:19.413518 3099 log.go:172] (0xc000116840) (0xc0005fe000) Create stream\nI0501 17:14:19.413533 3099 log.go:172] (0xc000116840) (0xc0005fe000) Stream added, broadcasting: 3\nI0501 17:14:19.414547 3099 log.go:172] (0xc000116840) Reply frame received for 3\nI0501 17:14:19.414599 3099 log.go:172] (0xc000116840) (0xc0005bc500) Create stream\nI0501 17:14:19.414623 3099 log.go:172] (0xc000116840) (0xc0005bc500) Stream added, broadcasting: 5\nI0501 17:14:19.415358 3099 log.go:172] (0xc000116840) Reply frame received for 5\nI0501 17:14:19.474236 3099 log.go:172] (0xc000116840) Data frame received for 5\nI0501 17:14:19.474265 3099 log.go:172] (0xc0005bc500) (5) Data frame handling\nI0501 17:14:19.474279 3099 log.go:172] (0xc0005bc500) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0501 17:14:19.724575 3099 log.go:172] (0xc000116840) Data frame received for 5\nI0501 17:14:19.724632 3099 log.go:172] (0xc0005bc500) (5) Data frame handling\nI0501 17:14:19.724668 3099 log.go:172] (0xc000116840) Data frame received for 3\nI0501 17:14:19.724687 3099 log.go:172] (0xc0005fe000) (3) Data frame handling\nI0501 17:14:19.724714 3099 log.go:172] (0xc0005fe000) (3) Data frame sent\nI0501 17:14:19.724735 3099 log.go:172] (0xc000116840) Data frame received for 3\nI0501 17:14:19.724753 3099 log.go:172] (0xc0005fe000) (3) Data frame handling\nI0501 17:14:19.726695 3099 log.go:172] (0xc000116840) Data frame received for 1\nI0501 17:14:19.726709 3099 log.go:172] (0xc0005fe820) (1) Data frame handling\nI0501 17:14:19.726727 3099 log.go:172] (0xc0005fe820) (1) Data frame sent\nI0501 17:14:19.727007 3099 log.go:172] (0xc000116840) (0xc0005fe820) Stream removed, broadcasting: 1\nI0501 17:14:19.727161 3099 log.go:172] (0xc000116840) Go away 
received\nI0501 17:14:19.727533 3099 log.go:172] (0xc000116840) (0xc0005fe820) Stream removed, broadcasting: 1\nI0501 17:14:19.727561 3099 log.go:172] (0xc000116840) (0xc0005fe000) Stream removed, broadcasting: 3\nI0501 17:14:19.727578 3099 log.go:172] (0xc000116840) (0xc0005bc500) Stream removed, broadcasting: 5\n" May 1 17:14:19.732: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 17:14:19.732: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 17:14:19.732: INFO: Waiting for statefulset status.replicas updated to 0 May 1 17:14:19.873: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 1 17:14:29.882: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 17:14:29.882: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 17:14:29.882: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 17:14:30.227: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:30.227: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:30.227: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:30.227: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:30.227: INFO: May 1 17:14:30.227: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:31.232: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:31.232: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:31.232: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:31.232: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:31.232: INFO: May 1 17:14:31.233: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:32.237: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:32.237: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:32.237: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:32.237: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:32.237: INFO: May 1 17:14:32.237: INFO: StatefulSet ss has not reached scale 0, 
at 3 May 1 17:14:33.275: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:33.275: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:33.275: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:33.275: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:33.275: INFO: May 1 17:14:33.275: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:34.279: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:34.279: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:34.279: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:34.279: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:34.279: INFO: May 1 17:14:34.279: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:35.284: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:35.284: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:35.284: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:35.284: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:35.285: INFO: May 1 17:14:35.285: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:36.289: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:36.289: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:36.289: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 
17:14:36.289: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:36.289: INFO: May 1 17:14:36.289: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:37.294: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:37.294: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:37.295: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:37.295: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:37.295: INFO: May 1 17:14:37.295: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:38.300: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:38.300: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:38.300: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:38.300: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:38.300: INFO: May 1 17:14:38.300: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 17:14:39.306: INFO: POD NODE PHASE GRACE CONDITIONS May 1 17:14:39.306: INFO: ss-0 iruya-worker2 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:13:46 +0000 UTC }] May 1 17:14:39.306: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:39.306: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 17:14:06 +0000 UTC }] May 1 17:14:39.306: INFO: May 1 17:14:39.306: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1933 May 1 17:14:40.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 17:14:40.458: INFO: rc: 1 May 1 17:14:40.458: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc003035710 exit status 1 true [0xc0029fe4e0 0xc0029fe508 0xc0029fe548] [0xc0029fe4e0 0xc0029fe508 0xc0029fe548] [0xc0029fe500 0xc0029fe530] [0xba6de0 0xba6de0] 0xc0026d25a0 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
May 1 17:14:50.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 17:14:50.559: INFO: rc: 1
May 1 17:14:50.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003035800 exit status 1 true [0xc0029fe550 0xc0029fe580 0xc0029fe5a8] [0xc0029fe550 0xc0029fe580 0xc0029fe5a8] [0xc0029fe560 0xc0029fe5a0] [0xba6de0 0xba6de0] 0xc0026d2c60 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[... the same RunHostCmd was retried roughly every 10s from 17:15:00 through 17:19:38; every attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found. The intervening retry entries, identical apart from timestamps and Go pointer values, are elided ...]
May 1 17:19:48.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1933 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 17:19:48.893: INFO: rc: 1
May 1 17:19:48.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 1 17:19:48.894: INFO: Scaling statefulset ss to 0
May 1 17:19:48.903: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 1 17:19:48.906: INFO: Deleting all statefulset in ns statefulset-1933
May 1 17:19:48.908: INFO: Scaling statefulset ss to 0
May 1 17:19:48.916: INFO: Waiting for statefulset status.replicas updated to 0
May 1 17:19:48.918: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:19:48.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1933" for this suite.
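The behavior logged above — re-running the same `kubectl exec` on a fixed interval until it succeeds or the suite gives up — is a generic retry-until-success loop. A minimal shell sketch of that pattern follows; the function name `retry_cmd` and the interval/attempt parameters are illustrative, not the e2e framework's actual implementation:

```shell
#!/bin/sh
# Re-run a command until it exits 0, waiting a fixed interval between
# attempts, and give up after a maximum number of tries -- the same shape
# as the RunHostCmd retry loop in the log above.
retry_cmd() {
    interval=$1; max_tries=$2; shift 2
    i=1
    while [ "$i" -le "$max_tries" ]; do
        "$@" && return 0                  # command succeeded
        rc=$?
        echo "rc: $rc -- waiting ${interval}s to retry ($i/$max_tries)" >&2
        i=$((i + 1))
        sleep "$interval"
    done
    return 1                              # all attempts failed
}

# Example: `true` succeeds immediately, so only one attempt is made.
retry_cmd 1 3 true && echo "command eventually succeeded"
```

Unlike the log above (where the pod had already been deleted, so every retry failed until the 5-minute budget ran out), a command that starts succeeding mid-way ends the loop early.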
May 1 17:19:54.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:19:55.057: INFO: namespace statefulset-1933 deletion completed in 6.103823283s
• [SLOW TEST:369.109 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:19:55.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-f0f10ef6-3a73-4c69-9b69-442f9e8cd533
STEP: Creating a pod to test consume secrets
May 1 17:19:55.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9" in namespace "projected-6765" to be "success or failure"
May 1 17:19:55.779: INFO: Pod "pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9": Phase="Pending", Reason="", readiness=false. Elapsed: 128.830155ms
May 1 17:19:57.783: INFO: Pod "pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132652893s
May 1 17:19:59.788: INFO: Pod "pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137856257s
STEP: Saw pod success
May 1 17:19:59.788: INFO: Pod "pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9" satisfied condition "success or failure"
May 1 17:19:59.792: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9 container secret-volume-test:
STEP: delete the pod
May 1 17:20:00.202: INFO: Waiting for pod pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9 to disappear
May 1 17:20:00.254: INFO: Pod pod-projected-secrets-ce846c62-37b4-43d0-ac54-4621f4b461e9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:20:00.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6765" for this suite.
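The "consumable in multiple volumes" case exercised above amounts to mounting the same Secret into one pod through two separate volumes and reading it from both mount points. A minimal illustrative manifest for that pattern — the pod, Secret, and volume names here are hypothetical, not the generated names from the test run — might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-volumes        # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret       # hypothetical Secret
  - name: secret-volume-2
    secret:
      secretName: my-secret       # same Secret, mounted a second time
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
```

The container exits 0 once it has read the secret data from both paths, which is what lets the framework treat `Phase="Succeeded"` as "success or failure" satisfied.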
May 1 17:20:06.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:20:06.432: INFO: namespace projected-6765 deletion completed in 6.174270807s
• [SLOW TEST:11.375 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:20:06.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 1 17:20:11.898: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:20:11.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5974" for this suite.
May 1 17:20:18.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:20:18.088: INFO: namespace container-runtime-5974 deletion completed in 6.089225293s
• [SLOW TEST:11.656 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:20:18.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 1 17:20:23.629: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:20:23.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5605" for this suite.
May 1 17:20:29.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:20:30.014: INFO: namespace container-runtime-5605 deletion completed in 6.315611264s
• [SLOW TEST:11.926 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:20:30.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 1 17:20:36.607: INFO: Successfully updated pod "labelsupdate5c5a78a6-1d49-4002-b5d2-0fbdb4a4131c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:20:38.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1065" for this suite.
May 1 17:21:00.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 17:21:00.993: INFO: namespace projected-1065 deletion completed in 22.240869317s
• [SLOW TEST:30.979 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 1 17:21:00.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
May 1 17:21:01.059: INFO: Waiting up to 5m0s for pod "var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef" in namespace "var-expansion-3211" to be "success or failure"
May 1 17:21:01.074: INFO: Pod "var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef": Phase="Pending", Reason="", readiness=false. Elapsed: 15.004199ms
May 1 17:21:03.079: INFO: Pod "var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019478s
May 1 17:21:05.083: INFO: Pod "var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023880356s
STEP: Saw pod success
May 1 17:21:05.083: INFO: Pod "var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef" satisfied condition "success or failure"
May 1 17:21:05.086: INFO: Trying to get logs from node iruya-worker pod var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef container dapi-container:
STEP: delete the pod
May 1 17:21:05.146: INFO: Waiting for pod var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef to disappear
May 1 17:21:05.202: INFO: Pod var-expansion-f4287b40-eb71-4bf4-b1e9-d562f49a93ef no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 1 17:21:05.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3211" for this suite.
May 1 17:21:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 17:21:11.331: INFO: namespace var-expansion-3211 deletion completed in 6.126037026s • [SLOW TEST:10.337 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 17:21:11.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 17:21:16.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2432" for this suite. 
May 1 17:21:23.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 17:21:23.073: INFO: namespace watch-2432 deletion completed in 6.101070229s • [SLOW TEST:11.740 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 17:21:23.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 1 17:21:28.011: INFO: Successfully updated pod "labelsupdate093ccea9-d937-404b-a293-7d205899779c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 17:21:32.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1632" for this suite. 
May 1 17:21:54.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 17:21:54.196: INFO: namespace downward-api-1632 deletion completed in 22.114611206s • [SLOW TEST:31.123 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 1 17:21:54.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 1 17:21:54.255: INFO: Waiting up to 5m0s for pod "client-containers-25ded96e-aa3a-4ef3-8eee-937067423130" in namespace "containers-7590" to be "success or failure" May 1 17:21:54.292: INFO: Pod "client-containers-25ded96e-aa3a-4ef3-8eee-937067423130": Phase="Pending", Reason="", readiness=false. Elapsed: 36.51206ms May 1 17:21:56.296: INFO: Pod "client-containers-25ded96e-aa3a-4ef3-8eee-937067423130": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040307192s May 1 17:21:58.300: INFO: Pod "client-containers-25ded96e-aa3a-4ef3-8eee-937067423130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044317726s STEP: Saw pod success May 1 17:21:58.300: INFO: Pod "client-containers-25ded96e-aa3a-4ef3-8eee-937067423130" satisfied condition "success or failure" May 1 17:21:58.303: INFO: Trying to get logs from node iruya-worker pod client-containers-25ded96e-aa3a-4ef3-8eee-937067423130 container test-container: STEP: delete the pod May 1 17:21:58.486: INFO: Waiting for pod client-containers-25ded96e-aa3a-4ef3-8eee-937067423130 to disappear May 1 17:21:58.554: INFO: Pod client-containers-25ded96e-aa3a-4ef3-8eee-937067423130 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 1 17:21:58.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7590" for this suite. May 1 17:22:04.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 17:22:04.804: INFO: namespace containers-7590 deletion completed in 6.247042667s • [SLOW TEST:10.608 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSMay 1 17:22:04.805: INFO: Running AfterSuite actions on all nodes May 1 17:22:04.805: INFO: Running AfterSuite actions on node 1 May 1 17:22:04.805: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6947.599 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS