I0420 15:43:48.646872 6 e2e.go:243] Starting e2e run "983e8289-b5b6-41bb-b833-66f5e3504223" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618933427 - Will randomize all specs
Will run 215 of 4413 specs

Apr 20 15:43:48.850: INFO: >>> kubeConfig: /root/.kube/config
Apr 20 15:43:48.851: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 20 15:43:48.873: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 20 15:43:48.896: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 20 15:43:48.896: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 20 15:43:48.896: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 20 15:43:48.903: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 20 15:43:48.903: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 20 15:43:48.903: INFO: e2e test version: v1.15.12
Apr 20 15:43:48.904: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:43:48.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Apr 20 15:43:48.991: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
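For reference while reading the "complex daemon" spec below: the test creates a DaemonSet gated by a node selector, then flips node labels (blue/green) to schedule and unschedule its pods, and finally switches the update strategy to RollingUpdate. A minimal manifest behaving the same way might look like the following sketch; the label keys, values, and container are hypothetical stand-ins, since the log does not show the generated spec:

```yaml
# Hypothetical sketch of the DaemonSet this spec exercises. Pods schedule
# only onto nodes carrying the matching color label; relabeling a node
# from blue to green unschedules them until the selector is updated too.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemon: "true"
  updateStrategy:
    type: RollingUpdate        # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        daemon: "true"
    spec:
      nodeSelector:
        color: blue            # assumed label; the log only mentions blue/green
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```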
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 15:43:49.008: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 20 15:43:49.015: INFO: Number of nodes with available pods: 0
Apr 20 15:43:49.015: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 20 15:43:49.079: INFO: Number of nodes with available pods: 0
Apr 20 15:43:49.079: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:50.084: INFO: Number of nodes with available pods: 0
Apr 20 15:43:50.084: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:51.146: INFO: Number of nodes with available pods: 0
Apr 20 15:43:51.146: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:52.083: INFO: Number of nodes with available pods: 0
Apr 20 15:43:52.083: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:53.084: INFO: Number of nodes with available pods: 1
Apr 20 15:43:53.084: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 20 15:43:53.115: INFO: Number of nodes with available pods: 1
Apr 20 15:43:53.115: INFO: Number of running nodes: 0, number of available pods: 1
Apr 20 15:43:54.119: INFO: Number of nodes with available pods: 0
Apr 20 15:43:54.119: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 20 15:43:54.130: INFO: Number of nodes with available pods: 0
Apr 20 15:43:54.130: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:55.231: INFO: Number of nodes with available pods: 0
Apr 20 15:43:55.231: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:56.134: INFO: Number of nodes with available pods: 0
Apr 20 15:43:56.134: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:57.134: INFO: Number of nodes with available pods: 0
Apr 20 15:43:57.134: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:58.134: INFO: Number of nodes with available pods: 0
Apr 20 15:43:58.134: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:43:59.134: INFO: Number of nodes with available pods: 0
Apr 20 15:43:59.134: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:44:00.134: INFO: Number of nodes with available pods: 0
Apr 20 15:44:00.134: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:44:01.135: INFO: Number of nodes with available pods: 0
Apr 20 15:44:01.135: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 15:44:02.134: INFO: Number of nodes with available pods: 1
Apr 20 15:44:02.134: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4589, will wait for the garbage collector to delete the pods
Apr 20 15:44:02.201: INFO: Deleting DaemonSet.extensions daemon-set took: 6.654185ms
Apr 20 15:44:04.301: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100210965s
Apr 20 15:44:09.481: INFO: Number of nodes with available pods: 0
Apr 20 15:44:09.481: INFO: Number of running nodes: 0, number of available pods: 0
Apr 20 15:44:09.486: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4589/daemonsets","resourceVersion":"1286903"},"items":null}
Apr 20 15:44:09.489: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4589/pods","resourceVersion":"1286903"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:44:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4589" for this suite.
Apr 20 15:44:15.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:44:15.920: INFO: namespace daemonsets-4589 deletion completed in 6.379458336s

• [SLOW TEST:27.015 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:44:15.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 15:44:16.086: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 20 15:44:21.090: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 20 15:44:21.090: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 20 15:44:21.130: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8471,SelfLink:/apis/apps/v1/namespaces/deployment-8471/deployments/test-cleanup-deployment,UID:ab7d5b86-a305-420c-a23f-43ee61d62c22,ResourceVersion:1286961,Generation:1,CreationTimestamp:2021-04-20 15:44:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
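The Deployment dump above carries RevisionHistoryLimit:*0, which is what makes the old test-cleanup-controller ReplicaSet eligible for deletion as soon as the rollout supersedes it. A manifest equivalent to that spec (reconstructed from the fields visible in the dump, so treat it as a sketch rather than the exact object the test submits) would be:

```yaml
# Sketch of test-cleanup-deployment as dumped above: with
# revisionHistoryLimit: 0, old ReplicaSets are garbage-collected
# immediately once they hold no desired replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```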
Apr 20 15:44:21.166: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8471,SelfLink:/apis/apps/v1/namespaces/deployment-8471/replicasets/test-cleanup-deployment-55bbcbc84c,UID:eb02705c-5ef3-451a-ae41-dec97944bd92,ResourceVersion:1286963,Generation:1,CreationTimestamp:2021-04-20 15:44:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ab7d5b86-a305-420c-a23f-43ee61d62c22 0xc002f0bac7 0xc002f0bac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 20 15:44:21.166: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Apr 20 15:44:21.166: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8471,SelfLink:/apis/apps/v1/namespaces/deployment-8471/replicasets/test-cleanup-controller,UID:ba2b5c83-a3b8-43ef-a839-1c9a791923ba,ResourceVersion:1286962,Generation:1,CreationTimestamp:2021-04-20 15:44:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ab7d5b86-a305-420c-a23f-43ee61d62c22 0xc002f0b9f7 0xc002f0b9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 20 15:44:21.229: INFO: Pod "test-cleanup-controller-b6mvf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-b6mvf,GenerateName:test-cleanup-controller-,Namespace:deployment-8471,SelfLink:/api/v1/namespaces/deployment-8471/pods/test-cleanup-controller-b6mvf,UID:78653e71-1e80-4cec-b9d2-66ecc9120954,ResourceVersion:1286959,Generation:0,CreationTimestamp:2021-04-20 15:44:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ba2b5c83-a3b8-43ef-a839-1c9a791923ba 0xc002cc1be7 0xc002cc1be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-594vb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-594vb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-594vb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc1c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc1c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 15:44:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 15:44:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 15:44:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 15:44:16 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.211,StartTime:2021-04-20 15:44:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 15:44:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://df95219491af6e6c0fcad269bc95e5605c5d7e44a14f0d209b38b10b16dc1bb1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 20 15:44:21.229: INFO: Pod "test-cleanup-deployment-55bbcbc84c-45flh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-45flh,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8471,SelfLink:/api/v1/namespaces/deployment-8471/pods/test-cleanup-deployment-55bbcbc84c-45flh,UID:14d68a9b-6255-4e17-a086-49c8ac305418,ResourceVersion:1286969,Generation:0,CreationTimestamp:2021-04-20 15:44:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c eb02705c-5ef3-451a-ae41-dec97944bd92 0xc002cc1d67 0xc002cc1d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-594vb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-594vb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-594vb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc1de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc1e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 15:44:21 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:44:21.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8471" for this suite.
Apr 20 15:44:27.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:44:27.487: INFO: namespace deployment-8471 deletion completed in 6.224015941s

• [SLOW TEST:11.567 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:44:27.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 20 15:44:27.700: INFO: Waiting up to 5m0s for pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2" in namespace "emptydir-830" to be "success or failure"
Apr 20 15:44:27.718: INFO: Pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.595458ms
Apr 20 15:44:29.878: INFO: Pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177543881s
Apr 20 15:44:31.884: INFO: Pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183496837s
Apr 20 15:44:33.888: INFO: Pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187445091s
STEP: Saw pod success
Apr 20 15:44:33.888: INFO: Pod "pod-941e1c37-93f7-4700-aaaa-859a89e4bde2" satisfied condition "success or failure"
Apr 20 15:44:33.891: INFO: Trying to get logs from node iruya-worker2 pod pod-941e1c37-93f7-4700-aaaa-859a89e4bde2 container test-container:
STEP: delete the pod
Apr 20 15:44:33.956: INFO: Waiting for pod pod-941e1c37-93f7-4700-aaaa-859a89e4bde2 to disappear
Apr 20 15:44:33.980: INFO: Pod pod-941e1c37-93f7-4700-aaaa-859a89e4bde2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:44:33.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-830" for this suite.
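The "0644 on tmpfs" pod above runs to Succeeded and is then deleted. Such a pod conventionally mounts an emptyDir with medium: Memory (tmpfs-backed) and writes a file with mode 0644; the sketch below reflects that shape, but the image, command, and names are assumptions, since the log shows only the pod name and the test-container name:

```yaml
# Hypothetical sketch of a "0644 on tmpfs" check: medium: Memory makes
# the emptyDir tmpfs-backed; the container writes a file and exits so
# the pod can reach Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29     # assumed image
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```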
Apr 20 15:44:40.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:44:40.122: INFO: namespace emptydir-830 deletion completed in 6.118280537s • [SLOW TEST:12.635 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:44:40.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:44:44.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-331" for this suite. Apr 20 15:45:36.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:45:36.423: INFO: namespace kubelet-test-331 deletion completed in 52.182819498s • [SLOW TEST:56.301 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:45:36.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 20 15:45:36.511: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7489,SelfLink:/api/v1/namespaces/watch-7489/configmaps/e2e-watch-test-resource-version,UID:99ec1bf8-4f93-46ac-bc67-5cf7816d663e,ResourceVersion:1287206,Generation:0,CreationTimestamp:2021-04-20 15:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 20 15:45:36.511: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7489,SelfLink:/api/v1/namespaces/watch-7489/configmaps/e2e-watch-test-resource-version,UID:99ec1bf8-4f93-46ac-bc67-5cf7816d663e,ResourceVersion:1287207,Generation:0,CreationTimestamp:2021-04-20 15:45:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:45:36.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7489" for this suite.
Apr 20 15:45:42.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:45:42.697: INFO: namespace watch-7489 deletion completed in 6.159114027s

• [SLOW TEST:6.273 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:45:42.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 20 15:45:42.778: INFO: Waiting up to 5m0s for pod "pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87" in namespace "emptydir-844" to be "success or failure"
Apr 20 15:45:42.789: INFO: Pod "pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889409ms
Apr 20 15:45:44.793: INFO: Pod "pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014858478s
Apr 20 15:45:46.797: INFO: Pod "pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018943684s
STEP: Saw pod success
Apr 20 15:45:46.797: INFO: Pod "pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87" satisfied condition "success or failure"
Apr 20 15:45:46.800: INFO: Trying to get logs from node iruya-worker pod pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87 container test-container: <nil>
STEP: delete the pod
Apr 20 15:45:46.839: INFO: Waiting for pod pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87 to disappear
Apr 20 15:45:46.860: INFO: Pod pod-4a8e21de-6c16-4098-9ef5-64a80ea35c87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:45:46.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-844" for this suite.
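For reference, the pod exercised above mounts an emptyDir volume with medium "Memory", which the kubelet backs with tmpfs. A minimal sketch of a pod of the same shape (the name, image, and mount path are illustrative, not the test's generated values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative; the test generates a UID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the mount table entry so the tmpfs backing and its mode are visible
    command: ["sh", "-c", "mount | grep /mnt/volume"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/volume
  volumes:
  - name: cache
    emptyDir:
      medium: Memory               # Memory => tmpfs-backed emptyDir
```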
Apr 20 15:45:52.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:45:52.978: INFO: namespace emptydir-844 deletion completed in 6.114055865s

• [SLOW TEST:10.281 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:45:52.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 20 15:45:53.086: INFO: Waiting up to 5m0s for pod "pod-25fe76ac-948a-4ab3-a423-24bee65a0958" in namespace "emptydir-1379" to be "success or failure"
Apr 20 15:45:53.089: INFO: Pod "pod-25fe76ac-948a-4ab3-a423-24bee65a0958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486037ms
Apr 20 15:45:55.093: INFO: Pod "pod-25fe76ac-948a-4ab3-a423-24bee65a0958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006731263s
Apr 20 15:45:57.097: INFO: Pod "pod-25fe76ac-948a-4ab3-a423-24bee65a0958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010544895s
STEP: Saw pod success
Apr 20 15:45:57.097: INFO: Pod "pod-25fe76ac-948a-4ab3-a423-24bee65a0958" satisfied condition "success or failure"
Apr 20 15:45:57.099: INFO: Trying to get logs from node iruya-worker pod pod-25fe76ac-948a-4ab3-a423-24bee65a0958 container test-container: <nil>
STEP: delete the pod
Apr 20 15:45:57.115: INFO: Waiting for pod pod-25fe76ac-948a-4ab3-a423-24bee65a0958 to disappear
Apr 20 15:45:57.151: INFO: Pod pod-25fe76ac-948a-4ab3-a423-24bee65a0958 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:45:57.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1379" for this suite.
Apr 20 15:46:03.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:03.290: INFO: namespace emptydir-1379 deletion completed in 6.135672231s

• [SLOW TEST:10.312 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:03.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 15:46:03.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 20 15:46:03.498: INFO: stderr: ""
Apr 20 15:46:03.498: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2021-01-22T21:57:01Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:03.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5907" for this suite.
Apr 20 15:46:09.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:09.604: INFO: namespace kubectl-5907 deletion completed in 6.101213555s

• [SLOW TEST:6.314 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:09.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6815.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6815.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6815.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6815.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 15:46:15.716: INFO: DNS probes using dns-6815/dns-test-1b54f359-9d5e-4346-bcb5-781ff6eb512a succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:15.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6815" for this suite.
Apr 20 15:46:21.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:21.861: INFO: namespace dns-6815 deletion completed in 6.10487483s

• [SLOW TEST:12.257 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:21.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 20 15:46:25.979: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:26.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8294" for this suite.
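With `terminationMessagePolicy: FallbackToLogsOnError`, the kubelet reads the termination message from `terminationMessagePath` when the container wrote one (as in the successful pod above) and falls back to the tail of the container log only when the container fails with an empty message file. A sketch of a pod of this shape (name, image, and command are illustrative, not the test's generated spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox
    # write "OK" to the termination message file, matching the &{OK} expectation above
    command: ["sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```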
Apr 20 15:46:32.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:32.140: INFO: namespace container-runtime-8294 deletion completed in 6.126700372s

• [SLOW TEST:10.279 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:32.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 20 15:46:32.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5881'
Apr 20 15:46:34.706: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 20 15:46:34.706: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 20 15:46:34.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5881'
Apr 20 15:46:34.902: INFO: stderr: ""
Apr 20 15:46:34.902: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:34.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5881" for this suite.
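As the deprecation warning in the output above notes, `kubectl run --generator=job/v1` was later removed in favor of declarative creation. A roughly equivalent Job manifest (a sketch, not the test's exact generated spec):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure              # matches --restart=OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```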
Apr 20 15:46:40.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:41.014: INFO: namespace kubectl-5881 deletion completed in 6.109423158s

• [SLOW TEST:8.874 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:41.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bd44a570-2abd-4980-b7da-ed33bc6f5d74
STEP: Creating a pod to test consume configMaps
Apr 20 15:46:41.107: INFO: Waiting up to 5m0s for pod "pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613" in namespace "configmap-4136" to be "success or failure"
Apr 20 15:46:41.116: INFO: Pod "pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613": Phase="Pending", Reason="", readiness=false. Elapsed: 8.884069ms
Apr 20 15:46:43.120: INFO: Pod "pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013160138s
Apr 20 15:46:45.124: INFO: Pod "pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016903001s
STEP: Saw pod success
Apr 20 15:46:45.124: INFO: Pod "pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613" satisfied condition "success or failure"
Apr 20 15:46:45.127: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613 container configmap-volume-test: <nil>
STEP: delete the pod
Apr 20 15:46:45.162: INFO: Waiting for pod pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613 to disappear
Apr 20 15:46:45.176: INFO: Pod pod-configmaps-215215d5-dcad-4bc4-846b-cb00bc142613 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4136" for this suite.
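The "mappings" this test exercises are `items` entries on a configMap volume source, which project a selected key to a chosen file path inside the mount. An illustrative sketch (names, keys, and paths are examples, not the test's generated values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map      # the test appends a generated UID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/mapped-file"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                           # the mapping: key data-1 appears as mapped-file
      - key: data-1
        path: mapped-file
```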
Apr 20 15:46:51.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:46:51.321: INFO: namespace configmap-4136 deletion completed in 6.141540386s

• [SLOW TEST:10.306 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:46:51.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 20 15:46:51.398: INFO: Waiting up to 5m0s for pod "downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31" in namespace "downward-api-3650" to be "success or failure"
Apr 20 15:46:51.419: INFO: Pod "downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31": Phase="Pending", Reason="", readiness=false. Elapsed: 21.012804ms
Apr 20 15:46:53.422: INFO: Pod "downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024366606s
Apr 20 15:46:55.426: INFO: Pod "downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028018628s
STEP: Saw pod success
Apr 20 15:46:55.426: INFO: Pod "downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31" satisfied condition "success or failure"
Apr 20 15:46:55.429: INFO: Trying to get logs from node iruya-worker pod downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31 container dapi-container: <nil>
STEP: delete the pod
Apr 20 15:46:55.469: INFO: Waiting for pod downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31 to disappear
Apr 20 15:46:55.544: INFO: Pod downward-api-a76efce1-a5f1-4e43-b0fa-b230792f7a31 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:46:55.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3650" for this suite.
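Exposing the pod UID as an environment variable, as this test does, uses a downward API `fieldRef`. A minimal sketch (pod name, image, and variable name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # the pod's own UID, resolved by the kubelet
```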
Apr 20 15:47:01.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:47:01.689: INFO: namespace downward-api-3650 deletion completed in 6.14012402s

• [SLOW TEST:10.368 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:47:01.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 15:47:01.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574" in namespace "projected-6910" to be "success or failure"
Apr 20 15:47:01.811: INFO: Pod "downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574": Phase="Pending", Reason="", readiness=false. Elapsed: 23.690774ms
Apr 20 15:47:03.814: INFO: Pod "downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027069051s
Apr 20 15:47:05.818: INFO: Pod "downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031438156s
STEP: Saw pod success
Apr 20 15:47:05.818: INFO: Pod "downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574" satisfied condition "success or failure"
Apr 20 15:47:05.821: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574 container client-container: <nil>
STEP: delete the pod
Apr 20 15:47:05.863: INFO: Waiting for pod downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574 to disappear
Apr 20 15:47:05.874: INFO: Pod downwardapi-volume-53347c89-edfd-4e46-802b-6422ff3ef574 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:47:05.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6910" for this suite.
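The projected downward API volume used here exposes only the pod name as a file. A minimal sketch of the same shape (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname              # file content becomes the pod's own name
            fieldRef:
              fieldPath: metadata.name
```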
Apr 20 15:47:11.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:47:11.984: INFO: namespace projected-6910 deletion completed in 6.106565055s

• [SLOW TEST:10.294 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:47:11.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:47:19.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7559" for this suite.
Apr 20 15:47:41.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 15:47:41.227: INFO: namespace replication-controller-7559 deletion completed in 22.122525738s

• [SLOW TEST:29.242 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 15:47:41.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5621d8ce-1f92-4f58-a54b-fde80cb4c113
STEP: Creating a pod to test consume secrets
Apr 20 15:47:41.612: INFO: Waiting up to 5m0s for pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b" in namespace "secrets-9416" to be "success or failure"
Apr 20 15:47:41.767: INFO: Pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b": Phase="Pending", Reason="", readiness=false. Elapsed: 155.1395ms
Apr 20 15:47:43.770: INFO: Pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158424725s
Apr 20 15:47:45.774: INFO: Pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b": Phase="Running", Reason="", readiness=true. Elapsed: 4.162708647s
Apr 20 15:47:47.779: INFO: Pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.166957608s
STEP: Saw pod success
Apr 20 15:47:47.779: INFO: Pod "pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b" satisfied condition "success or failure"
Apr 20 15:47:47.782: INFO: Trying to get logs from node iruya-worker pod pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b container secret-volume-test: <nil>
STEP: delete the pod
Apr 20 15:47:47.813: INFO: Waiting for pod pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b to disappear
Apr 20 15:47:47.827: INFO: Pod pod-secrets-02e0f45d-41ae-4a76-9e5d-2c277359e43b no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 15:47:47.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9416" for this suite.
Apr 20 15:47:53.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:47:53.931: INFO: namespace secrets-9416 deletion completed in 6.100283051s • [SLOW TEST:12.703 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:47:53.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0420 15:48:04.047688 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 20 15:48:04.047: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:48:04.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1498" for this suite. 
Apr 20 15:48:14.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:48:14.463: INFO: namespace gc-1498 deletion completed in 10.412156745s • [SLOW TEST:20.531 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:48:14.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-e5fc4fa1-ba55-48e1-a799-e318bd234471 STEP: Creating secret with name s-test-opt-upd-fbb121db-7a15-4710-b137-88fe1cdd5e7b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e5fc4fa1-ba55-48e1-a799-e318bd234471 STEP: Updating secret 
s-test-opt-upd-fbb121db-7a15-4710-b137-88fe1cdd5e7b STEP: Creating secret with name s-test-opt-create-9665ef76-8dd3-430d-9c08-4a080876e421 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:49:46.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6838" for this suite. Apr 20 15:50:08.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:50:08.876: INFO: namespace projected-6838 deletion completed in 22.14735477s • [SLOW TEST:114.413 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:50:08.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-89wd STEP: Creating a pod to test atomic-volume-subpath Apr 20 15:50:09.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-89wd" in namespace "subpath-1115" to be "success or failure" Apr 20 15:50:09.042: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939853ms Apr 20 15:50:11.104: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064896689s Apr 20 15:50:13.109: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 4.069080998s Apr 20 15:50:15.112: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 6.072863325s Apr 20 15:50:17.134: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 8.094642913s Apr 20 15:50:19.138: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 10.098850584s Apr 20 15:50:21.142: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 12.102694985s Apr 20 15:50:23.147: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 14.107052212s Apr 20 15:50:25.151: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 16.111139432s Apr 20 15:50:27.155: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.115293721s Apr 20 15:50:29.160: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 20.120127131s Apr 20 15:50:31.212: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Running", Reason="", readiness=true. Elapsed: 22.172595998s Apr 20 15:50:33.216: INFO: Pod "pod-subpath-test-projected-89wd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.176218741s STEP: Saw pod success Apr 20 15:50:33.216: INFO: Pod "pod-subpath-test-projected-89wd" satisfied condition "success or failure" Apr 20 15:50:33.218: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-89wd container test-container-subpath-projected-89wd: STEP: delete the pod Apr 20 15:50:33.238: INFO: Waiting for pod pod-subpath-test-projected-89wd to disappear Apr 20 15:50:33.248: INFO: Pod pod-subpath-test-projected-89wd no longer exists STEP: Deleting pod pod-subpath-test-projected-89wd Apr 20 15:50:33.248: INFO: Deleting pod "pod-subpath-test-projected-89wd" in namespace "subpath-1115" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:50:33.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1115" for this suite. 
Apr 20 15:50:39.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:50:39.361: INFO: namespace subpath-1115 deletion completed in 6.106603026s • [SLOW TEST:30.484 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:50:39.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 20 15:50:39.434: INFO: Waiting up to 5m0s for pod 
"var-expansion-8b171c57-9914-4c6b-9e39-393558238862" in namespace "var-expansion-7000" to be "success or failure" Apr 20 15:50:39.437: INFO: Pod "var-expansion-8b171c57-9914-4c6b-9e39-393558238862": Phase="Pending", Reason="", readiness=false. Elapsed: 3.011508ms Apr 20 15:50:41.441: INFO: Pod "var-expansion-8b171c57-9914-4c6b-9e39-393558238862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007133159s Apr 20 15:50:43.446: INFO: Pod "var-expansion-8b171c57-9914-4c6b-9e39-393558238862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011524999s STEP: Saw pod success Apr 20 15:50:43.446: INFO: Pod "var-expansion-8b171c57-9914-4c6b-9e39-393558238862" satisfied condition "success or failure" Apr 20 15:50:43.448: INFO: Trying to get logs from node iruya-worker pod var-expansion-8b171c57-9914-4c6b-9e39-393558238862 container dapi-container: STEP: delete the pod Apr 20 15:50:43.466: INFO: Waiting for pod var-expansion-8b171c57-9914-4c6b-9e39-393558238862 to disappear Apr 20 15:50:43.486: INFO: Pod var-expansion-8b171c57-9914-4c6b-9e39-393558238862 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:50:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7000" for this suite. 
Apr 20 15:50:49.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:50:49.639: INFO: namespace var-expansion-7000 deletion completed in 6.149583547s • [SLOW TEST:10.277 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:50:49.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 20 15:50:49.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7883' Apr 20 15:50:49.778: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 20 15:50:49.778: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 20 15:50:51.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7883' Apr 20 15:50:51.989: INFO: stderr: "" Apr 20 15:50:51.989: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:50:51.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7883" for this suite. 
Apr 20 15:51:14.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:51:14.094: INFO: namespace kubectl-7883 deletion completed in 22.100756174s • [SLOW TEST:24.455 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:51:14.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ls7zp in namespace proxy-8221 I0420 15:51:14.262392 6 runners.go:180] Created replication controller with name: proxy-service-ls7zp, 
namespace: proxy-8221, replica count: 1 I0420 15:51:15.312770 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 15:51:16.313042 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 15:51:17.313308 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 15:51:18.313574 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:19.313807 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:20.314079 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:21.314268 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:22.314514 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:23.314743 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:24.315028 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0420 15:51:25.315242 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0420 15:51:26.315451 6 runners.go:180] proxy-service-ls7zp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 20 15:51:26.318: INFO: setup took 12.165354346s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 8.661149ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 8.509409ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 8.61383ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 8.785331ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 8.798342ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 9.431737ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 9.373095ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 9.427801ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 9.439346ms) Apr 20 15:51:26.327: INFO: (0) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 9.512214ms) Apr 20 15:51:26.328: INFO: (0) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 9.608437ms) Apr 20 15:51:26.332: INFO: (0) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 13.938685ms) Apr 20 15:51:26.333: INFO: (0) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 14.771257ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 14.97372ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 15.046494ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 15.139875ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 15.252803ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 15.248085ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 15.473412ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 15.919899ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 15.96184ms) Apr 20 15:51:26.354: INFO: (1) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 15.817124ms) Apr 20 15:51:26.355: INFO: (1) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 16.068592ms) Apr 20 15:51:26.355: INFO: (1) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 16.395905ms) Apr 20 15:51:26.355: INFO: (1) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 16.446566ms) Apr 20 15:51:26.363: INFO: (2) 
/api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 8.161295ms) Apr 20 15:51:26.370: INFO: (2) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 14.795464ms) Apr 20 15:51:26.370: INFO: (2) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 15.05225ms) Apr 20 15:51:26.370: INFO: (2) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 15.049941ms) Apr 20 15:51:26.370: INFO: (2) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... (200; 15.159892ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 15.618857ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 15.60455ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 15.75052ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 15.954307ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 15.972622ms) Apr 20 15:51:26.371: INFO: (2) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 15.969562ms) Apr 20 15:51:26.372: INFO: (2) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 16.917769ms) Apr 20 15:51:26.375: INFO: (3) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 3.208902ms) Apr 20 15:51:26.375: INFO: (3) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 3.490263ms) Apr 20 15:51:26.376: INFO: (3) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.970085ms) Apr 20 15:51:26.376: INFO: 
(3) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 4.099282ms) Apr 20 15:51:26.376: INFO: (3) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test (200; 4.541565ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 4.57716ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 4.680172ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.733247ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 4.907649ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 4.955155ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 4.917549ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 5.136412ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 5.405638ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... 
(200; 5.460218ms) Apr 20 15:51:26.377: INFO: (3) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.465249ms) Apr 20 15:51:26.380: INFO: (4) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 2.619159ms) Apr 20 15:51:26.381: INFO: (4) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 3.064067ms) Apr 20 15:51:26.381: INFO: (4) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.763075ms) Apr 20 15:51:26.381: INFO: (4) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 3.856054ms) Apr 20 15:51:26.382: INFO: (4) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 3.956635ms) Apr 20 15:51:26.382: INFO: (4) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 4.357202ms) Apr 20 15:51:26.382: INFO: (4) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.422549ms) Apr 20 15:51:26.382: INFO: (4) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 4.502362ms) Apr 20 15:51:26.383: INFO: (4) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 4.876052ms) Apr 20 15:51:26.383: INFO: (4) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 5.02724ms) Apr 20 15:51:26.383: INFO: (4) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 5.172263ms) Apr 20 15:51:26.383: INFO: (4) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 5.152868ms) Apr 20 15:51:26.386: INFO: (5) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 2.581348ms) Apr 20 15:51:26.387: INFO: (5) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 3.895577ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 4.919322ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.941495ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.982075ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.019347ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... (200; 5.248726ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 5.287349ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 5.241645ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 5.25822ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.271473ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 5.324241ms) Apr 20 15:51:26.388: INFO: (5) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 5.294068ms) Apr 20 15:51:26.391: INFO: (6) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.506519ms) Apr 20 15:51:26.391: INFO: (6) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 2.589154ms) Apr 20 15:51:26.391: INFO: (6) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 2.554499ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 3.556119ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 3.83949ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.891787ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.89442ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 3.883185ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 3.966924ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.956542ms) Apr 20 15:51:26.392: INFO: (6) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test (200; 1.781651ms) Apr 20 15:51:26.395: INFO: (7) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 1.868222ms) Apr 20 15:51:26.395: INFO: (7) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 2.742771ms) Apr 20 15:51:26.395: INFO: (7) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.704636ms) Apr 20 15:51:26.395: INFO: (7) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 2.715247ms) Apr 20 15:51:26.397: INFO: (7) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.96229ms) Apr 20 15:51:26.397: INFO: (7) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.987834ms) Apr 20 15:51:26.397: INFO: (7) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... 
(200; 4.014996ms) Apr 20 15:51:26.397: INFO: (7) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 4.118818ms) Apr 20 15:51:26.397: INFO: (7) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... (200; 3.528034ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 3.562294ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.540649ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.544343ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 3.556448ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 3.580645ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 3.607164ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 3.680521ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 3.646336ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 3.683605ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.690603ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 3.749887ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.706163ms) Apr 20 15:51:26.401: INFO: (8) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test (200; 21.028891ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 21.096245ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 21.292373ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 21.283857ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 21.325353ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 21.344815ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 21.518782ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 21.585609ms) Apr 20 15:51:26.423: INFO: (9) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... 
(200; 3.7966ms) Apr 20 15:51:26.429: INFO: (10) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.1926ms) Apr 20 15:51:26.430: INFO: (10) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 4.870941ms) Apr 20 15:51:26.430: INFO: (10) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.89925ms) Apr 20 15:51:26.430: INFO: (10) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 4.979667ms) Apr 20 15:51:26.430: INFO: (10) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 5.463547ms) Apr 20 15:51:26.431: INFO: (10) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 5.949673ms) Apr 20 15:51:26.431: INFO: (10) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 6.213254ms) Apr 20 15:51:26.434: INFO: (11) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.519805ms) Apr 20 15:51:26.438: INFO: (11) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 6.738313ms) Apr 20 15:51:26.438: INFO: (11) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 6.786303ms) Apr 20 15:51:26.438: INFO: (11) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 6.863036ms) Apr 20 15:51:26.438: INFO: (11) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... 
(200; 6.972311ms) Apr 20 15:51:26.439: INFO: (11) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 8.062625ms) Apr 20 15:51:26.440: INFO: (11) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 8.12514ms) Apr 20 15:51:26.440: INFO: (11) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 8.09157ms) Apr 20 15:51:26.440: INFO: (11) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 8.115677ms) Apr 20 15:51:26.440: INFO: (11) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 8.055933ms) Apr 20 15:51:26.440: INFO: (11) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 8.06942ms) Apr 20 15:51:26.442: INFO: (12) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 2.808394ms) Apr 20 15:51:26.442: INFO: (12) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.608752ms) Apr 20 15:51:26.442: INFO: (12) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.884552ms) Apr 20 15:51:26.443: INFO: (12) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... 
(200; 3.429058ms) Apr 20 15:51:26.443: INFO: (12) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.42014ms) Apr 20 15:51:26.444: INFO: (12) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 3.782335ms) Apr 20 15:51:26.444: INFO: (12) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 3.790021ms) Apr 20 15:51:26.444: INFO: (12) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 3.965388ms) Apr 20 15:51:26.444: INFO: (12) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 3.894945ms) Apr 20 15:51:26.444: INFO: (12) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.504051ms) Apr 20 15:51:26.445: INFO: (12) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 5.21796ms) Apr 20 15:51:26.445: INFO: (12) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 5.098776ms) Apr 20 15:51:26.445: INFO: (12) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 5.246739ms) Apr 20 15:51:26.445: INFO: (12) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 5.343387ms) Apr 20 15:51:26.445: INFO: (12) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 5.249564ms) Apr 20 15:51:26.448: INFO: (13) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 3.099851ms) Apr 20 15:51:26.449: INFO: (13) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.172799ms) Apr 20 15:51:26.449: INFO: (13) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.201278ms) Apr 20 15:51:26.449: INFO: (13) 
/api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 4.176278ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 4.725353ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 4.863341ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 5.190775ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 5.210965ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 5.227789ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 5.26178ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 5.32157ms) Apr 20 15:51:26.450: INFO: (13) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 5.259906ms) Apr 20 15:51:26.453: INFO: (14) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 2.037584ms) Apr 20 15:51:26.453: INFO: (14) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 2.065834ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 4.015747ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.43559ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 4.404626ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 4.444609ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 4.509384ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.513089ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 4.506893ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 4.596774ms) Apr 20 15:51:26.455: INFO: (14) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 4.313382ms) Apr 20 15:51:26.461: INFO: (15) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... 
(200; 4.349096ms) Apr 20 15:51:26.461: INFO: (15) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 4.34295ms) Apr 20 15:51:26.461: INFO: (15) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.425494ms) Apr 20 15:51:26.463: INFO: (15) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 6.802739ms) Apr 20 15:51:26.463: INFO: (15) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 6.860268ms) Apr 20 15:51:26.463: INFO: (15) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 6.828889ms) Apr 20 15:51:26.463: INFO: (15) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 7.193798ms) Apr 20 15:51:26.463: INFO: (15) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 7.191601ms) Apr 20 15:51:26.465: INFO: (16) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 2.002845ms) Apr 20 15:51:26.468: INFO: (16) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 4.037615ms) Apr 20 15:51:26.468: INFO: (16) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.222337ms) Apr 20 15:51:26.468: INFO: (16) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 4.420486ms) Apr 20 15:51:26.468: INFO: (16) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 4.548739ms) Apr 20 15:51:26.469: INFO: (16) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 5.130726ms) Apr 20 15:51:26.469: INFO: (16) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 5.402326ms) Apr 20 15:51:26.469: INFO: (16) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test<... (200; 4.539069ms) Apr 20 15:51:26.474: INFO: (17) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 4.700796ms) Apr 20 15:51:26.474: INFO: (17) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 4.854305ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.891292ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.474599ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 5.343497ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.44808ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 5.539611ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 5.699594ms) Apr 20 15:51:26.475: INFO: (17) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: test (200; 3.949859ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.978902ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... 
(200; 4.111041ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 4.162249ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 4.181752ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 4.187619ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:1080/proxy/: ... (200; 4.250356ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.310131ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 4.509308ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 4.568189ms) Apr 20 15:51:26.482: INFO: (18) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 4.562542ms) Apr 20 15:51:26.483: INFO: (18) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 4.974755ms) Apr 20 15:51:26.483: INFO: (18) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: ... 
(200; 3.027046ms) Apr 20 15:51:26.486: INFO: (19) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 3.409326ms) Apr 20 15:51:26.486: INFO: (19) /api/v1/namespaces/proxy-8221/pods/http:proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 3.445619ms) Apr 20 15:51:26.487: INFO: (19) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:162/proxy/: bar (200; 4.599186ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:160/proxy/: foo (200; 5.285643ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname2/proxy/: tls qux (200; 5.26052ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6:1080/proxy/: test<... (200; 5.304946ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname1/proxy/: foo (200; 5.345345ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/proxy-service-ls7zp-ltqq6/proxy/: test (200; 5.373954ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname1/proxy/: foo (200; 5.378216ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/http:proxy-service-ls7zp:portname2/proxy/: bar (200; 5.353755ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:460/proxy/: tls baz (200; 5.408118ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:462/proxy/: tls qux (200; 5.424872ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/https:proxy-service-ls7zp:tlsportname1/proxy/: tls baz (200; 5.447177ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/services/proxy-service-ls7zp:portname2/proxy/: bar (200; 5.458009ms) Apr 20 15:51:26.488: INFO: (19) /api/v1/namespaces/proxy-8221/pods/https:proxy-service-ls7zp-ltqq6:443/proxy/: 
>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-wclf STEP: Creating a pod to test atomic-volume-subpath Apr 20 15:51:35.968: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wclf" in namespace "subpath-6240" to be "success or failure" Apr 20 15:51:35.979: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.034445ms Apr 20 15:51:38.094: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125438951s Apr 20 15:51:40.097: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 4.129344714s Apr 20 15:51:42.102: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 6.133538688s Apr 20 15:51:44.106: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 8.137548725s Apr 20 15:51:46.110: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 10.141785937s Apr 20 15:51:48.114: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 12.146287575s Apr 20 15:51:50.118: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 14.150387354s Apr 20 15:51:52.130: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.161518729s Apr 20 15:51:54.133: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 18.165373907s Apr 20 15:51:56.137: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 20.169087941s Apr 20 15:51:58.141: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Running", Reason="", readiness=true. Elapsed: 22.173306675s Apr 20 15:52:00.145: INFO: Pod "pod-subpath-test-secret-wclf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.177160825s STEP: Saw pod success Apr 20 15:52:00.145: INFO: Pod "pod-subpath-test-secret-wclf" satisfied condition "success or failure" Apr 20 15:52:00.148: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-wclf container test-container-subpath-secret-wclf: STEP: delete the pod Apr 20 15:52:00.174: INFO: Waiting for pod pod-subpath-test-secret-wclf to disappear Apr 20 15:52:00.178: INFO: Pod pod-subpath-test-secret-wclf no longer exists STEP: Deleting pod pod-subpath-test-secret-wclf Apr 20 15:52:00.178: INFO: Deleting pod "pod-subpath-test-secret-wclf" in namespace "subpath-6240" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:52:00.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6240" for this suite. 
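The subpath spec above builds its pod programmatically inside the e2e framework, so no manifest appears in the log. A minimal pod exercising the same behavior, mounting a single key of a secret volume at a subPath, might look like the following sketch; the secret name and key are illustrative, not taken from the log:

```yaml
# Hedged sketch only; names are hypothetical, not from the test run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret            # hypothetical secret name
  containers:
    - name: test-container-subpath
      image: busybox
      command: ["cat", "/etc/secret-file"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-file
          subPath: password              # mount only this key of the secret
```

Because secret volumes are atomic writers, the subPath mount is what this conformance spec stresses: the pod must still see a consistent file while the volume contents are swapped atomically underneath.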
Apr 20 15:52:06.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:52:06.287: INFO: namespace subpath-6240 deletion completed in 6.102903532s • [SLOW TEST:30.404 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:52:06.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 20 15:52:06.348: INFO: Waiting up to 5m0s for pod 
"client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5" in namespace "containers-4413" to be "success or failure" Apr 20 15:52:06.370: INFO: Pod "client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.303613ms Apr 20 15:52:08.374: INFO: Pod "client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025597051s Apr 20 15:52:10.378: INFO: Pod "client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5": Phase="Running", Reason="", readiness=true. Elapsed: 4.029880014s Apr 20 15:52:12.382: INFO: Pod "client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034122277s STEP: Saw pod success Apr 20 15:52:12.382: INFO: Pod "client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5" satisfied condition "success or failure" Apr 20 15:52:12.386: INFO: Trying to get logs from node iruya-worker pod client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5 container test-container: STEP: delete the pod Apr 20 15:52:12.473: INFO: Waiting for pod client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5 to disappear Apr 20 15:52:12.477: INFO: Pod client-containers-e512fdd5-d9e3-4f40-8fb0-672d3ea879c5 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:52:12.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4413" for this suite. 
Apr 20 15:52:18.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:52:18.589: INFO: namespace containers-4413 deletion completed in 6.108553326s • [SLOW TEST:12.301 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:52:18.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-3201/secret-test-2276f2ce-ab4e-4b40-aca5-3ba3078ed758 STEP: Creating a pod to test consume secrets Apr 20 15:52:18.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175" in namespace "secrets-3201" to be "success or failure" Apr 20 15:52:18.688: INFO: Pod 
"pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175": Phase="Pending", Reason="", readiness=false. Elapsed: 29.242939ms Apr 20 15:52:20.692: INFO: Pod "pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032938s Apr 20 15:52:22.696: INFO: Pod "pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175": Phase="Running", Reason="", readiness=true. Elapsed: 4.036629589s Apr 20 15:52:24.700: INFO: Pod "pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041305208s STEP: Saw pod success Apr 20 15:52:24.700: INFO: Pod "pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175" satisfied condition "success or failure" Apr 20 15:52:24.704: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175 container env-test: STEP: delete the pod Apr 20 15:52:24.724: INFO: Waiting for pod pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175 to disappear Apr 20 15:52:24.729: INFO: Pod pod-configmaps-1bd8f02b-1767-4f70-b3b1-45b6c634b175 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:52:24.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3201" for this suite. 
Apr 20 15:52:30.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:52:30.856: INFO: namespace secrets-3201 deletion completed in 6.102527543s • [SLOW TEST:12.266 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:52:30.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-5798ff5f-be76-4bf8-a7bc-558f9c382c72 STEP: Creating a pod to test consume configMaps Apr 20 15:52:30.932: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87" in namespace "projected-715" to be "success or failure" Apr 20 
15:52:30.939: INFO: Pod "pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464113ms Apr 20 15:52:32.942: INFO: Pod "pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010227469s Apr 20 15:52:34.947: INFO: Pod "pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014480203s STEP: Saw pod success Apr 20 15:52:34.947: INFO: Pod "pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87" satisfied condition "success or failure" Apr 20 15:52:34.950: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87 container projected-configmap-volume-test: STEP: delete the pod Apr 20 15:52:34.976: INFO: Waiting for pod pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87 to disappear Apr 20 15:52:34.999: INFO: Pod pod-projected-configmaps-6b0925c2-7635-45c8-b00f-0a87fe1e0f87 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:52:34.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-715" for this suite. 
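The projected-ConfigMap "volume with mappings" spec above remaps a ConfigMap key to a custom file path via `items`. A sketch of an equivalent pod, assuming illustrative names and paths (the real test generates them):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                               # illustrative image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # real test uses a generated name
          items:
          - key: data-2                # the "mapping": key data-2 surfaces at a chosen path
            path: path/to/data-2
```

The mapping is the point of the test: without `items`, every key would appear as a file named after the key at the mount root.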
Apr 20 15:52:41.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:52:41.097: INFO: namespace projected-715 deletion completed in 6.09258306s • [SLOW TEST:10.241 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:52:41.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-d26506d3-98c6-430d-85d3-2c6186b9fb6b in namespace container-probe-4897 Apr 20 
15:52:45.267: INFO: Started pod busybox-d26506d3-98c6-430d-85d3-2c6186b9fb6b in namespace container-probe-4897 STEP: checking the pod's current state and verifying that restartCount is present Apr 20 15:52:45.270: INFO: Initial restart count of pod busybox-d26506d3-98c6-430d-85d3-2c6186b9fb6b is 0 Apr 20 15:53:35.519: INFO: Restart count of pod container-probe-4897/busybox-d26506d3-98c6-430d-85d3-2c6186b9fb6b is now 1 (50.249306361s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:53:35.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4897" for this suite. Apr 20 15:53:41.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:53:41.758: INFO: namespace container-probe-4897 deletion completed in 6.139529046s • [SLOW TEST:60.661 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:53:41.758: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 20 15:53:46.333: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d908f708-87d6-4158-8737-0ecb00d55e21" Apr 20 15:53:46.333: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d908f708-87d6-4158-8737-0ecb00d55e21" in namespace "pods-8004" to be "terminated due to deadline exceeded" Apr 20 15:53:46.354: INFO: Pod "pod-update-activedeadlineseconds-d908f708-87d6-4158-8737-0ecb00d55e21": Phase="Running", Reason="", readiness=true. Elapsed: 20.582509ms Apr 20 15:53:48.358: INFO: Pod "pod-update-activedeadlineseconds-d908f708-87d6-4158-8737-0ecb00d55e21": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024945492s Apr 20 15:53:48.358: INFO: Pod "pod-update-activedeadlineseconds-d908f708-87d6-4158-8737-0ecb00d55e21" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:53:48.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8004" for this suite. 
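The `activeDeadlineSeconds` spec above exercises one of the few Pod spec fields that is mutable after creation (it may only be set or decreased, never increased). A sketch of the shape of the update, with an illustrative pod and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  activeDeadlineSeconds: 30        # later lowered; kubelet then kills the pod
  containers:
  - name: main
    image: busybox                 # illustrative image
    command: ["sh", "-c", "sleep 600"]
```

Lowering the deadline on the live object (e.g. `kubectl patch pod pod-update-activedeadlineseconds -p '{"spec":{"activeDeadlineSeconds":5}}'`) is what produces the `Phase="Failed", Reason="DeadlineExceeded"` transition recorded in the log.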
Apr 20 15:53:54.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:53:54.517: INFO: namespace pods-8004 deletion completed in 6.154313137s • [SLOW TEST:12.759 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:53:54.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 20 15:54:02.636: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:02.641: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:04.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:04.670: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:06.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:06.659: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:08.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:08.645: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:10.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:10.648: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:12.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:12.645: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:14.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:14.646: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:16.642: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:16.645: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:18.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:18.645: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:20.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:21.090: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:22.641: INFO: Waiting for pod pod-with-poststart-exec-hook to 
disappear Apr 20 15:54:22.646: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:24.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:24.646: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:26.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:26.646: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:28.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:28.645: INFO: Pod pod-with-poststart-exec-hook still exists Apr 20 15:54:30.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 20 15:54:30.646: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:54:30.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-753" for this suite. 
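The lifecycle-hook spec above runs a `postStart` exec hook that reports back to a separate handler pod (created in the BeforeEach as "the container to handle the HTTPGet hook request"). A sketch of the hooked pod; the handler address and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Illustrative: the real test curls the handler pod so it can
          # observe that the hook actually fired.
          command: ["sh", "-c", "wget -q -O- http://HANDLER_POD_IP:8080/echo?msg=poststart"]
```

The long "Waiting for pod … to disappear" tail in the log is ordinary graceful-termination latency for the hooked pod, not a hook failure.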
Apr 20 15:54:52.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:54:52.759: INFO: namespace container-lifecycle-hook-753 deletion completed in 22.107686641s • [SLOW TEST:58.241 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:54:52.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 20 15:54:57.899: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:54:57.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6493" for this suite. Apr 20 15:55:03.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:55:04.048: INFO: namespace container-runtime-6493 deletion completed in 6.127035954s • [SLOW TEST:11.289 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to 
support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:55:04.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 20 15:55:04.092: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
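Registering the sample API server means creating an `APIService` object that tells the aggregation layer to proxy a group/version to an in-cluster Service. A rough sketch under the assumption that the sample server serves the upstream "wardle" example group (service name, namespace, and priorities are illustrative):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # must be "<version>.<group>"
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # illustrative Service fronting the deployment
    namespace: aggregator-5610
  caBundle: LS0t...                    # base64 CA that signed the sample server's cert (elided)
```

The `DeploymentStatus` dumps that follow are the test polling the backing `sample-apiserver-deployment` until it has an available replica, after which the aggregated API is probed for readiness.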
Apr 20 15:55:04.738: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 20 15:55:07.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 15:55:09.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754530904, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 20 15:55:12.079: INFO: Waited 645.526436ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:55:13.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5610" for this suite. Apr 20 15:55:19.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:55:19.706: INFO: namespace aggregator-5610 deletion completed in 6.290861448s • [SLOW TEST:15.657 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:55:19.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2710 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 20 15:55:19.864: INFO: Found 0 stateful pods, waiting for 3 Apr 20 15:55:29.868: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:55:29.868: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:55:29.868: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 20 15:55:39.869: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:55:39.869: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:55:39.869: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine 
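The canary phase of this spec relies on the StatefulSet `RollingUpdate` partition: only ordinals at or above the partition receive the new revision. A sketch of the relevant spec, using the service and image names from the log (labels are an assumption):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                        # "Creating service test" in the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                         # canary: only ss2-2 rolls to the new revision
  selector:
    matchLabels:
      app: ss2                             # illustrative label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image under test
```

The "phased" part of the test then lowers the partition step by step (2 → 1 → 0), which is why the log waits for ss2-2, then ss2-1, then ss2-0 to reach revision `ss2-6c5cd755cd`.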
Apr 20 15:55:39.893: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 20 15:55:49.951: INFO: Updating stateful set ss2 Apr 20 15:55:49.977: INFO: Waiting for Pod statefulset-2710/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 20 15:56:00.282: INFO: Found 2 stateful pods, waiting for 3 Apr 20 15:56:10.288: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:56:10.288: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 15:56:10.288: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 20 15:56:10.311: INFO: Updating stateful set ss2 Apr 20 15:56:10.346: INFO: Waiting for Pod statefulset-2710/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 20 15:56:20.369: INFO: Updating stateful set ss2 Apr 20 15:56:20.397: INFO: Waiting for StatefulSet statefulset-2710/ss2 to complete update Apr 20 15:56:20.397: INFO: Waiting for Pod statefulset-2710/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 20 15:56:30.403: INFO: Deleting all statefulset in ns statefulset-2710 Apr 20 15:56:30.405: INFO: Scaling statefulset ss2 to 0 Apr 20 15:56:50.427: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 15:56:50.431: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:56:50.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2710" for this suite. Apr 20 15:56:56.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:56:56.543: INFO: namespace statefulset-2710 deletion completed in 6.096783596s • [SLOW TEST:96.837 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:56:56.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 20 15:56:56.574: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:57:04.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6394" for this suite. Apr 20 15:57:10.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:57:10.503: INFO: namespace init-container-6394 deletion completed in 6.112849904s • [SLOW TEST:13.959 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:57:10.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 20 15:57:10.592: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 20 15:57:10.599: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:10.604: INFO: Number of nodes with available pods: 0 Apr 20 15:57:10.604: INFO: Node iruya-worker is running more than one daemon pod Apr 20 15:57:11.609: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:11.613: INFO: Number of nodes with available pods: 0 Apr 20 15:57:11.613: INFO: Node iruya-worker is running more than one daemon pod Apr 20 15:57:12.609: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:12.612: INFO: Number of nodes with available pods: 0 Apr 20 15:57:12.612: INFO: Node iruya-worker is running more than one daemon pod Apr 20 
15:57:13.663: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:13.667: INFO: Number of nodes with available pods: 0 Apr 20 15:57:13.667: INFO: Node iruya-worker is running more than one daemon pod Apr 20 15:57:14.610: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:14.613: INFO: Number of nodes with available pods: 1 Apr 20 15:57:14.613: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:15.609: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:15.612: INFO: Number of nodes with available pods: 2 Apr 20 15:57:15.612: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 20 15:57:15.651: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:15.651: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:15.659: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:16.665: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:16.665: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:16.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:17.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:17.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:17.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:18.664: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:18.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:18.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:19.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:19.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:19.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:20.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:20.663: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:20.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:20.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:21.662: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:21.662: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:21.662: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:21.665: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:22.664: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:22.664: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:22.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:22.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:23.664: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:23.664: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:23.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:23.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:24.664: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:24.664: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:24.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:24.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:25.664: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:25.664: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:25.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:25.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:26.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:26.664: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:26.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:26.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:27.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:27.663: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:27.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:27.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:28.663: INFO: Wrong image for pod: daemon-set-tmsr8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:28.663: INFO: Pod daemon-set-tmsr8 is not available Apr 20 15:57:28.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:28.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:29.664: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:29.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:29.669: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:30.663: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:30.663: INFO: Wrong image for pod: daemon-set-xp2hd. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:30.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:32.213: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:32.214: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:32.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:32.664: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:32.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:32.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:33.728: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:33.728: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:33.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:34.663: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:34.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:34.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:35.704: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:35.704: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:35.708: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:36.806: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:36.806: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:36.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:37.725: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:37.725: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:37.729: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:38.670: INFO: Pod daemon-set-67bzb is not available Apr 20 15:57:38.670: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:38.673: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:39.663: INFO: Wrong image for pod: daemon-set-xp2hd. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:39.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:40.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:40.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:42.101: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:42.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:42.834: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:43.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:43.718: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:43.718: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:43.765: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:44.700: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:44.700: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:44.704: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:45.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:45.664: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:45.746: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:46.663: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:46.663: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:46.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:47.665: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 20 15:57:47.665: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:47.669: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:48.664: INFO: Wrong image for pod: daemon-set-xp2hd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 20 15:57:48.664: INFO: Pod daemon-set-xp2hd is not available Apr 20 15:57:48.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:49.663: INFO: Pod daemon-set-mn58n is not available Apr 20 15:57:49.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 20 15:57:49.671: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:49.674: INFO: Number of nodes with available pods: 1 Apr 20 15:57:49.674: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:50.679: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:50.683: INFO: Number of nodes with available pods: 1 Apr 20 15:57:50.683: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:51.679: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:51.682: INFO: Number of nodes with available pods: 1 Apr 20 15:57:51.682: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:52.679: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:52.683: INFO: Number of nodes with available pods: 1 Apr 20 15:57:52.683: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:53.680: INFO: DaemonSet 
pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:53.683: INFO: Number of nodes with available pods: 1 Apr 20 15:57:53.683: INFO: Node iruya-worker2 is running more than one daemon pod Apr 20 15:57:54.679: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 20 15:57:54.684: INFO: Number of nodes with available pods: 2 Apr 20 15:57:54.684: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8214, will wait for the garbage collector to delete the pods Apr 20 15:57:54.761: INFO: Deleting DaemonSet.extensions daemon-set took: 7.440903ms Apr 20 15:57:55.161: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.336087ms Apr 20 15:58:09.265: INFO: Number of nodes with available pods: 0 Apr 20 15:58:09.265: INFO: Number of running nodes: 0, number of available pods: 0 Apr 20 15:58:09.267: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8214/daemonsets","resourceVersion":"1289835"},"items":null} Apr 20 15:58:09.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8214/pods","resourceVersion":"1289835"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:58:09.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
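The RollingUpdate test above creates a two-pod DaemonSet, patches its pod template image, and polls until every pod reports the new image. A minimal manifest of the kind being exercised might look like the following sketch; the name and initial image mirror the log, but the selector labels are assumptions, since the e2e framework's exact spec is not shown in this output:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # label is an assumption; the log does not show the selector
  updateStrategy:
    type: RollingUpdate            # the update strategy this conformance test verifies
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # initial image seen in the log
```

Updating `spec.template.spec.containers[0].image` (here, to `gcr.io/kubernetes-e2e-test-images/redis:1.0`, the target image in the log) causes the controller to delete and recreate pods node by node, which is why the log alternates between "Wrong image for pod" and "Pod ... is not available" until both worker nodes converge.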
STEP: Destroying namespace "daemonsets-8214" for this suite. Apr 20 15:58:15.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:58:15.401: INFO: namespace daemonsets-8214 deletion completed in 6.099351102s • [SLOW TEST:64.897 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:58:15.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 20 15:58:15.478: INFO: Waiting up to 5m0s for pod "pod-9d50c21b-4d37-4963-b910-963dc8d920f9" in namespace "emptydir-5911" to be "success or failure" Apr 20 15:58:15.484: INFO: Pod 
"pod-9d50c21b-4d37-4963-b910-963dc8d920f9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.888753ms Apr 20 15:58:17.488: INFO: Pod "pod-9d50c21b-4d37-4963-b910-963dc8d920f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009752201s Apr 20 15:58:19.492: INFO: Pod "pod-9d50c21b-4d37-4963-b910-963dc8d920f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013812894s STEP: Saw pod success Apr 20 15:58:19.492: INFO: Pod "pod-9d50c21b-4d37-4963-b910-963dc8d920f9" satisfied condition "success or failure" Apr 20 15:58:19.495: INFO: Trying to get logs from node iruya-worker pod pod-9d50c21b-4d37-4963-b910-963dc8d920f9 container test-container: STEP: delete the pod Apr 20 15:58:19.682: INFO: Waiting for pod pod-9d50c21b-4d37-4963-b910-963dc8d920f9 to disappear Apr 20 15:58:19.705: INFO: Pod pod-9d50c21b-4d37-4963-b910-963dc8d920f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:58:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5911" for this suite. 
Apr 20 15:58:25.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:58:25.843: INFO: namespace emptydir-5911 deletion completed in 6.133997728s • [SLOW TEST:10.442 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:58:25.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:58:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-6420" for this suite. Apr 20 15:58:31.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:58:32.079: INFO: namespace services-6420 deletion completed in 6.142962405s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.236 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:58:32.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 20 15:58:56.247: INFO: Container started at 2021-04-20 15:58:35 +0000 UTC, pod became ready at 2021-04-20 15:58:55 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:58:56.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8139" for this suite. Apr 20 15:59:18.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:59:18.364: INFO: namespace container-probe-8139 deletion completed in 22.111716411s • [SLOW TEST:46.284 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:59:18.365: INFO: >>> kubeConfig: /root/.kube/config 
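The readiness-probe result reported above (container started at 15:58:35, pod Ready at 15:58:55) is consistent with a probe configured with an initial delay, so the pod only becomes Ready well after its container starts and is never restarted. A sketch of such a spec; the probe command and timing values are assumptions chosen to match the ~20 s gap, not values read from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-test   # hypothetical name
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]   # assumed readiness check
      initialDelaySeconds: 20            # matches the ~20 s start-to-Ready gap in the log
      periodSeconds: 5
```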
STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 15:59:44.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1276" for this suite. Apr 20 15:59:50.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:59:50.683: INFO: namespace namespaces-1276 deletion completed in 6.133098557s STEP: Destroying namespace "nsdeletetest-5004" for this suite. Apr 20 15:59:50.686: INFO: Namespace nsdeletetest-5004 was already deleted STEP: Destroying namespace "nsdeletetest-4539" for this suite. 
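The sequence of STEPs above (create a test namespace, create a pod in it, delete the namespace, recreate it, verify it is empty) exercises namespace garbage collection: deleting a namespace cascades to every object inside it. A pod created in one of the test namespaces might look like this sketch (pod name and image are assumptions; only the namespace names appear in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                    # hypothetical name
  namespace: nsdeletetest-5004     # one of the test namespaces named in the log
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine   # assumed image
# Deleting the namespace removes this pod along with it; the test then
# recreates the namespace and asserts that no pods remain.
```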
Apr 20 15:59:56.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 15:59:56.793: INFO: namespace nsdeletetest-4539 deletion completed in 6.107087493s • [SLOW TEST:38.428 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 15:59:56.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: 
the termination message should be set Apr 20 16:00:00.962: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:00:01.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3785" for this suite. Apr 20 16:00:07.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:00:07.254: INFO: namespace container-runtime-3785 deletion completed in 6.145078086s • [SLOW TEST:10.461 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:00:07.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-babacc38-11c5-484b-8ec1-ad00f8e340a9
STEP: Creating a pod to test consume secrets
Apr 20 16:00:07.386: INFO: Waiting up to 5m0s for pod "pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9" in namespace "secrets-4157" to be "success or failure"
Apr 20 16:00:07.390: INFO: Pod "pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287853ms
Apr 20 16:00:09.394: INFO: Pod "pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008546971s
Apr 20 16:00:11.398: INFO: Pod "pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012806663s
STEP: Saw pod success
Apr 20 16:00:11.399: INFO: Pod "pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9" satisfied condition "success or failure"
Apr 20 16:00:11.401: INFO: Trying to get logs from node iruya-worker pod pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9 container secret-volume-test:
STEP: delete the pod
Apr 20 16:00:11.437: INFO: Waiting for pod pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9 to disappear
Apr 20 16:00:11.461: INFO: Pod pod-secrets-07972d29-baa8-4e21-8592-a83f9f47c8f9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:00:11.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4157" for this suite.
Apr 20 16:00:17.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:00:17.568: INFO: namespace secrets-4157 deletion completed in 6.103528497s
STEP: Destroying namespace "secret-namespace-9935" for this suite.
Apr 20 16:00:23.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:00:23.673: INFO: namespace secret-namespace-9935 deletion completed in 6.104850358s
• [SLOW TEST:16.419 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:00:23.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 20 16:00:28.309: INFO: Successfully updated pod "labelsupdatec5075ac4-312f-4ac8-bb93-3632b24e25b8"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:00:32.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4019" for this suite.
Apr 20 16:00:54.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:00:54.446: INFO: namespace downward-api-4019 deletion completed in 22.110477501s
• [SLOW TEST:30.773 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:00:54.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2dd6bea9-6f9f-4e98-997a-83112d940eb8
STEP: Creating a pod to test consume configMaps
Apr 20 16:00:54.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274" in namespace "configmap-5602" to be "success or failure"
Apr 20 16:00:54.570: INFO: Pod "pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274": Phase="Pending", Reason="", readiness=false. Elapsed: 5.307805ms
Apr 20 16:00:56.573: INFO: Pod "pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00833123s
Apr 20 16:00:58.584: INFO: Pod "pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019467449s
STEP: Saw pod success
Apr 20 16:00:58.584: INFO: Pod "pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274" satisfied condition "success or failure"
Apr 20 16:00:58.586: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274 container configmap-volume-test:
STEP: delete the pod
Apr 20 16:00:58.601: INFO: Waiting for pod pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274 to disappear
Apr 20 16:00:58.606: INFO: Pod pod-configmaps-d30a1545-a5d7-4f17-a1d8-a232968d2274 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:00:58.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5602" for this suite.
Apr 20 16:01:04.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:01:04.729: INFO: namespace configmap-5602 deletion completed in 6.119222777s
• [SLOW TEST:10.283 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:01:04.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 20 16:01:04.817: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:04.822: INFO: Number of nodes with available pods: 0
Apr 20 16:01:04.822: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:05.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:05.830: INFO: Number of nodes with available pods: 0
Apr 20 16:01:05.830: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:06.893: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:06.896: INFO: Number of nodes with available pods: 0
Apr 20 16:01:06.896: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:07.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:07.829: INFO: Number of nodes with available pods: 0
Apr 20 16:01:07.829: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:08.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:08.863: INFO: Number of nodes with available pods: 2
Apr 20 16:01:08.863: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 20 16:01:08.880: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:08.884: INFO: Number of nodes with available pods: 1
Apr 20 16:01:08.884: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:09.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:09.893: INFO: Number of nodes with available pods: 1
Apr 20 16:01:09.893: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:10.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:10.957: INFO: Number of nodes with available pods: 1
Apr 20 16:01:10.957: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:11.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:11.892: INFO: Number of nodes with available pods: 1
Apr 20 16:01:11.892: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:12.890: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:12.893: INFO: Number of nodes with available pods: 1
Apr 20 16:01:12.893: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:13.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:13.891: INFO: Number of nodes with available pods: 1
Apr 20 16:01:13.891: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:14.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:14.890: INFO: Number of nodes with available pods: 1
Apr 20 16:01:14.890: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:15.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:15.890: INFO: Number of nodes with available pods: 1
Apr 20 16:01:15.890: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:01:16.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:01:16.892: INFO: Number of nodes with available pods: 2
Apr 20 16:01:16.892: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1813, will wait for the garbage collector to delete the pods
Apr 20 16:01:16.953: INFO: Deleting DaemonSet.extensions daemon-set took: 6.47603ms
Apr 20 16:01:17.253: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.255652ms
Apr 20 16:01:29.256: INFO: Number of nodes with available pods: 0
Apr 20 16:01:29.256: INFO: Number of running nodes: 0, number of available pods: 0
Apr 20 16:01:29.259: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1813/daemonsets","resourceVersion":"1290520"},"items":null}
Apr 20 16:01:29.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1813/pods","resourceVersion":"1290520"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:01:29.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1813" for this suite.
Apr 20 16:01:35.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:01:35.393: INFO: namespace daemonsets-1813 deletion completed in 6.118623458s
• [SLOW TEST:30.664 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:01:35.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:01:35.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb" in namespace "downward-api-4639" to be "success or failure"
Apr 20 16:01:35.464: INFO: Pod "downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.176569ms
Apr 20 16:01:37.468: INFO: Pod "downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007096941s
Apr 20 16:01:39.475: INFO: Pod "downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014185814s
STEP: Saw pod success
Apr 20 16:01:39.475: INFO: Pod "downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb" satisfied condition "success or failure"
Apr 20 16:01:39.477: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb container client-container:
STEP: delete the pod
Apr 20 16:01:39.507: INFO: Waiting for pod downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb to disappear
Apr 20 16:01:39.530: INFO: Pod downwardapi-volume-37a5e378-3af8-4765-a99f-a67193fd44cb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:01:39.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4639" for this suite.
Apr 20 16:01:45.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:01:45.738: INFO: namespace downward-api-4639 deletion completed in 6.204931652s
• [SLOW TEST:10.345 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:01:45.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-0dc5c656-0ecd-46f5-a77e-becc95d17f07
STEP: Creating a pod to test consume secrets
Apr 20 16:01:45.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b" in namespace "projected-2220" to be "success or failure"
Apr 20 16:01:46.026: INFO: Pod "pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 88.968921ms
Apr 20 16:01:48.030: INFO: Pod "pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092766552s
Apr 20 16:01:50.034: INFO: Pod "pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097545276s
STEP: Saw pod success
Apr 20 16:01:50.035: INFO: Pod "pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b" satisfied condition "success or failure"
Apr 20 16:01:50.038: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b container projected-secret-volume-test:
STEP: delete the pod
Apr 20 16:01:50.076: INFO: Waiting for pod pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b to disappear
Apr 20 16:01:50.090: INFO: Pod pod-projected-secrets-6c1392c8-f52a-4d1c-a5cb-0bbd99982e9b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:01:50.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2220" for this suite.
Apr 20 16:01:56.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:01:56.191: INFO: namespace projected-2220 deletion completed in 6.096595167s
• [SLOW TEST:10.452 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:01:56.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 16:02:02.304: INFO: DNS probes using dns-test-599c445b-e900-4d2f-a44b-a5e4a3649897 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 16:02:08.439: INFO: File wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:08.443: INFO: File jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:08.443: INFO: Lookups using dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b failed for: [wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local]
Apr 20 16:02:13.448: INFO: File wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:13.451: INFO: File jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:13.451: INFO: Lookups using dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b failed for: [wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local]
Apr 20 16:02:18.448: INFO: File wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:18.452: INFO: File jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:18.452: INFO: Lookups using dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b failed for: [wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local]
Apr 20 16:02:23.448: INFO: File wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:23.451: INFO: File jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:23.451: INFO: Lookups using dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b failed for: [wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local]
Apr 20 16:02:28.447: INFO: File wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:28.450: INFO: File jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local from pod dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 20 16:02:28.450: INFO: Lookups using dns-7999/dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b failed for: [wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local]
Apr 20 16:02:33.450: INFO: DNS probes using dns-test-0cbce169-f2cb-4088-8c32-376334d49c8b succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7999.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7999.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 16:02:40.305: INFO: DNS probes using dns-test-4b82651f-c4f9-406d-abe8-415a608976a3 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:02:40.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7999" for this suite.
Apr 20 16:02:46.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:02:46.562: INFO: namespace dns-7999 deletion completed in 6.135508352s • [SLOW TEST:50.371 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:02:46.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 20 16:02:46.640: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290861,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 20 16:02:46.641: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290861,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 20 16:02:56.649: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290881,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 20 16:02:56.649: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290881,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 20 16:03:06.656: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290901,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 20 16:03:06.656: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290901,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 20 16:03:16.663: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290923,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 20 16:03:16.664: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-a,UID:0fae49e5-93e8-4122-971b-24a114bc220a,ResourceVersion:1290923,Generation:0,CreationTimestamp:2021-04-20 16:02:46 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 20 16:03:26.671: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-b,UID:fe41cbbe-4d02-4981-8a8c-1b08387e7ee0,ResourceVersion:1290943,Generation:0,CreationTimestamp:2021-04-20 16:03:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 20 16:03:26.671: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-b,UID:fe41cbbe-4d02-4981-8a8c-1b08387e7ee0,ResourceVersion:1290943,Generation:0,CreationTimestamp:2021-04-20 16:03:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 20 16:03:36.678: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-b,UID:fe41cbbe-4d02-4981-8a8c-1b08387e7ee0,ResourceVersion:1290963,Generation:0,CreationTimestamp:2021-04-20 16:03:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 20 16:03:36.678: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2138,SelfLink:/api/v1/namespaces/watch-2138/configmaps/e2e-watch-test-configmap-b,UID:fe41cbbe-4d02-4981-8a8c-1b08387e7ee0,ResourceVersion:1290963,Generation:0,CreationTimestamp:2021-04-20 16:03:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:03:46.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2138" for this suite.
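The Watchers test above creates one ConfigMap per label and observes each ADDED/MODIFIED/DELETED event twice, because two watches match each object (the per-label watch and the "A or B" watch). The objects themselves can be sketched from the names and labels in the log:

```yaml
# Hypothetical reconstruction of the test objects; names, namespace,
# and labels are copied from the log, everything else is defaulted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-2138
  labels:
    watch-this-configmap: multiple-watchers-A
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  namespace: watch-2138
  labels:
    watch-this-configmap: multiple-watchers-B
```

The `mutation: 1` and `mutation: 2` entries in the dumps are the data keys the test writes to trigger the MODIFIED notifications.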
Apr 20 16:03:52.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:03:52.798: INFO: namespace watch-2138 deletion completed in 6.111967718s
• [SLOW TEST:66.235 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:03:52.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-e1d9218a-258f-42ad-84ad-8f8ec043ebf8
STEP: Creating a pod to test consume secrets
Apr 20 16:03:52.864: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d" in namespace "projected-7899" to be "success or failure"
Apr 20 16:03:52.868: INFO: Pod 
"pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.589206ms Apr 20 16:03:54.872: INFO: Pod "pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007788998s Apr 20 16:03:56.876: INFO: Pod "pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d": Phase="Running", Reason="", readiness=true. Elapsed: 4.011987091s Apr 20 16:03:58.880: INFO: Pod "pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016040537s STEP: Saw pod success Apr 20 16:03:58.880: INFO: Pod "pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d" satisfied condition "success or failure" Apr 20 16:03:58.883: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d container secret-volume-test: STEP: delete the pod Apr 20 16:03:58.905: INFO: Waiting for pod pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d to disappear Apr 20 16:03:58.910: INFO: Pod pod-projected-secrets-381bd768-0c75-40c5-be68-a3faa376153d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:03:58.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7899" for this suite. 
Apr 20 16:04:04.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:05.034: INFO: namespace projected-7899 deletion completed in 6.120726942s
• [SLOW TEST:12.236 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:05.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: 
Creating a pod to test downward API volume plugin
Apr 20 16:04:05.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf" in namespace "projected-4786" to be "success or failure"
Apr 20 16:04:05.095: INFO: Pod "downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.73321ms
Apr 20 16:04:07.099: INFO: Pod "downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007207564s
Apr 20 16:04:09.103: INFO: Pod "downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011165337s
STEP: Saw pod success
Apr 20 16:04:09.103: INFO: Pod "downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf" satisfied condition "success or failure"
Apr 20 16:04:09.105: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf container client-container:
STEP: delete the pod
Apr 20 16:04:09.127: INFO: Waiting for pod downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf to disappear
Apr 20 16:04:09.131: INFO: Pod downwardapi-volume-66ba73e5-e5d9-4d38-a21f-9c26e66887bf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:09.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4786" for this suite.
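This projected downwardAPI test exposes `limits.memory` through a volume file; the behavior under test is that, when the container declares no memory limit, the file reports the node's allocatable memory instead. A hedged sketch of that projection (image, path, and command are assumptions, not taken from the log):

```yaml
# Hypothetical sketch of a downward API projection of limits.memory.
# With no memory limit on the container, the projected file contains
# the node's allocatable memory, which is what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox   # assumption
    command: ["cat", "/etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```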
Apr 20 16:04:15.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:15.269: INFO: namespace projected-4786 deletion completed in 6.13428984s
• [SLOW TEST:10.235 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:15.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 20 16:04:15.390: INFO: Waiting up to 5m0s for pod "downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3" in namespace "downward-api-4832" to be "success or failure"
Apr 20 
16:04:15.400: INFO: Pod "downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.716169ms
Apr 20 16:04:17.403: INFO: Pod "downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013024093s
Apr 20 16:04:19.408: INFO: Pod "downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017474647s
STEP: Saw pod success
Apr 20 16:04:19.408: INFO: Pod "downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3" satisfied condition "success or failure"
Apr 20 16:04:19.411: INFO: Trying to get logs from node iruya-worker pod downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3 container dapi-container:
STEP: delete the pod
Apr 20 16:04:19.465: INFO: Waiting for pod downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3 to disappear
Apr 20 16:04:19.478: INFO: Pod downward-api-e4d7ecb2-1e5c-4c03-8a36-c7716d2a2ea3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:19.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4832" for this suite.
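The [sig-node] Downward API test above exercises the env-var form of the downward API: container resource requests and limits surfaced as environment variables via `resourceFieldRef`. A minimal sketch under assumed values (the image, command, and resource quantities are illustrative; the log does not show the real manifest):

```yaml
# Hypothetical sketch of requests/limits exposed as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox   # assumption
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```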
Apr 20 16:04:25.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:25.590: INFO: namespace downward-api-4832 deletion completed in 6.108356764s
• [SLOW TEST:10.320 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:25.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 20 16:04:25.641: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix634985724/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1352" for this suite.
Apr 20 16:04:31.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:31.807: INFO: namespace kubectl-1352 deletion completed in 6.099602559s
• [SLOW TEST:6.217 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:31.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events 
on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:37.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4134" for this suite.
Apr 20 16:04:43.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:43.608: INFO: namespace watch-4134 deletion completed in 6.19329802s
• [SLOW TEST:11.800 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:43.608: INFO: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e274001d-128b-4eb2-9bd1-d7ded64037cf
STEP: Creating a pod to test consume configMaps
Apr 20 16:04:43.694: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057" in namespace "projected-3086" to be "success or failure"
Apr 20 16:04:43.697: INFO: Pod "pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75152ms
Apr 20 16:04:45.700: INFO: Pod "pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006047766s
Apr 20 16:04:47.704: INFO: Pod "pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010621534s
STEP: Saw pod success
Apr 20 16:04:47.704: INFO: Pod "pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057" satisfied condition "success or failure"
Apr 20 16:04:47.707: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057 container projected-configmap-volume-test:
STEP: delete the pod
Apr 20 16:04:47.726: INFO: Waiting for pod pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057 to disappear
Apr 20 16:04:47.742: INFO: Pod pod-projected-configmaps-219d1e86-981e-4af7-acae-45242d976057 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:47.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3086" for this suite.
Apr 20 16:04:53.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:04:53.872: INFO: namespace projected-3086 deletion completed in 6.126332506s
• [SLOW TEST:10.264 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:04:53.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:04:54.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739" in namespace "downward-api-2702" to be "success or failure"
Apr 20 16:04:54.018: INFO: Pod "downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739": Phase="Pending", Reason="", readiness=false. Elapsed: 15.905487ms
Apr 20 16:04:56.070: INFO: Pod "downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068035771s
Apr 20 16:04:58.077: INFO: Pod "downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074661388s
STEP: Saw pod success
Apr 20 16:04:58.077: INFO: Pod "downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739" satisfied condition "success or failure"
Apr 20 16:04:58.079: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739 container client-container:
STEP: delete the pod
Apr 20 16:04:58.097: INFO: Waiting for pod downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739 to disappear
Apr 20 16:04:58.114: INFO: Pod downwardapi-volume-99be1d99-e41b-47c7-9b20-1644aef89739 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:04:58.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2702" for this suite.
Apr 20 16:05:04.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:05:04.250: INFO: namespace downward-api-2702 deletion completed in 6.132440559s
• [SLOW TEST:10.378 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:05:04.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b93f9f73-105e-4902-904b-f042148af307
STEP: Creating a pod to test consume secrets
Apr 20 16:05:04.363: INFO: Waiting up to 5m0s for pod "pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f" in namespace "secrets-1594" to be "success or failure"
Apr 20 16:05:04.378: INFO: Pod "pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.035954ms
Apr 20 16:05:06.436: INFO: Pod "pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072877414s
Apr 20 16:05:08.440: INFO: Pod "pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.076858219s STEP: Saw pod success Apr 20 16:05:08.440: INFO: Pod "pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f" satisfied condition "success or failure" Apr 20 16:05:08.443: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f container secret-env-test: STEP: delete the pod Apr 20 16:05:08.457: INFO: Waiting for pod pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f to disappear Apr 20 16:05:08.485: INFO: Pod pod-secrets-b46f209f-f2af-4596-a444-c0090522b77f no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:05:08.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1594" for this suite. Apr 20 16:05:14.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:05:14.593: INFO: namespace secrets-1594 deletion completed in 6.104849304s • [SLOW TEST:10.343 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:05:14.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-2139 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2139 to expose endpoints map[] Apr 20 16:05:14.702: INFO: Get endpoints failed (10.591888ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 20 16:05:15.705: INFO: successfully validated that service endpoint-test2 in namespace services-2139 exposes endpoints map[] (1.014057823s elapsed) STEP: Creating pod pod1 in namespace services-2139 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2139 to expose endpoints map[pod1:[80]] Apr 20 16:05:19.764: INFO: successfully validated that service endpoint-test2 in namespace services-2139 exposes endpoints map[pod1:[80]] (4.051604898s elapsed) STEP: Creating pod pod2 in namespace services-2139 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2139 to expose endpoints map[pod1:[80] pod2:[80]] Apr 20 16:05:23.870: INFO: successfully validated that service endpoint-test2 in namespace services-2139 exposes endpoints map[pod1:[80] pod2:[80]] (4.103578099s elapsed) STEP: Deleting pod pod1 in namespace services-2139 STEP: waiting up to 3m0s for service endpoint-test2 
in namespace services-2139 to expose endpoints map[pod2:[80]] Apr 20 16:05:24.919: INFO: successfully validated that service endpoint-test2 in namespace services-2139 exposes endpoints map[pod2:[80]] (1.04471896s elapsed) STEP: Deleting pod pod2 in namespace services-2139 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2139 to expose endpoints map[] Apr 20 16:05:25.932: INFO: successfully validated that service endpoint-test2 in namespace services-2139 exposes endpoints map[] (1.008568539s elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:05:26.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2139" for this suite. Apr 20 16:05:32.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:05:32.325: INFO: namespace services-2139 deletion completed in 6.126752853s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:17.731 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:05:32.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 20 16:05:32.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9308'
Apr 20 16:05:35.536: INFO: stderr: ""
Apr 20 16:05:35.536: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 20 16:05:35.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9308'
Apr 20 16:05:35.673: INFO: stderr: ""
Apr 20 16:05:35.673: INFO: stdout: "update-demo-nautilus-j2cz8 update-demo-nautilus-prnwt "
Apr 20 16:05:35.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2cz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9308'
Apr 20 16:05:35.766: INFO: stderr: ""
Apr 20 16:05:35.766: INFO: stdout: ""
Apr 20 16:05:35.766: INFO: update-demo-nautilus-j2cz8 is created but not running
Apr 20 16:05:40.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9308'
Apr 20 16:05:40.865: INFO: stderr: ""
Apr 20 16:05:40.865: INFO: stdout: "update-demo-nautilus-j2cz8 update-demo-nautilus-prnwt "
Apr 20 16:05:40.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2cz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9308'
Apr 20 16:05:40.954: INFO: stderr: ""
Apr 20 16:05:40.954: INFO: stdout: "true"
Apr 20 16:05:40.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2cz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9308'
Apr 20 16:05:41.043: INFO: stderr: ""
Apr 20 16:05:41.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 16:05:41.043: INFO: validating pod update-demo-nautilus-j2cz8
Apr 20 16:05:41.046: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 16:05:41.046: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 16:05:41.046: INFO: update-demo-nautilus-j2cz8 is verified up and running
Apr 20 16:05:41.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prnwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9308'
Apr 20 16:05:41.139: INFO: stderr: ""
Apr 20 16:05:41.139: INFO: stdout: "true"
Apr 20 16:05:41.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prnwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9308'
Apr 20 16:05:41.230: INFO: stderr: ""
Apr 20 16:05:41.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 16:05:41.230: INFO: validating pod update-demo-nautilus-prnwt
Apr 20 16:05:41.233: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 20 16:05:41.233: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 16:05:41.233: INFO: update-demo-nautilus-prnwt is verified up and running
STEP: using delete to clean up resources
Apr 20 16:05:41.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9308'
Apr 20 16:05:41.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 16:05:41.335: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 20 16:05:41.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9308'
Apr 20 16:05:41.445: INFO: stderr: "No resources found.\n"
Apr 20 16:05:41.445: INFO: stdout: ""
Apr 20 16:05:41.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9308 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 16:05:41.547: INFO: stderr: ""
Apr 20 16:05:41.547: INFO: stdout: "update-demo-nautilus-j2cz8\nupdate-demo-nautilus-prnwt\n"
Apr 20 16:05:42.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9308'
Apr 20 16:05:42.148: INFO: stderr: "No resources found.\n"
Apr 20 16:05:42.148: INFO: stdout: ""
Apr 20 16:05:42.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9308 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 16:05:42.232: INFO: stderr: ""
Apr 20 16:05:42.232: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:05:42.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9308" for this suite.
Apr 20 16:06:04.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:06:04.336: INFO: namespace kubectl-9308 deletion completed in 22.100271338s
• [SLOW TEST:32.010 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:06:04.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-94d44c63-2cae-48e3-9bb5-9b97b8b4c9c7
STEP: Creating configMap with name cm-test-opt-upd-5295ed1c-879e-4eaa-a22c-b3f63cc37175
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-94d44c63-2cae-48e3-9bb5-9b97b8b4c9c7
STEP: Updating configmap cm-test-opt-upd-5295ed1c-879e-4eaa-a22c-b3f63cc37175
STEP: Creating configMap with name cm-test-opt-create-dbfc0d05-48ae-44ff-9192-526ad50f95ad
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:06:12.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5343" for this suite.
Apr 20 16:06:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:06:34.752: INFO: namespace projected-5343 deletion completed in 22.119163903s
• [SLOW TEST:30.416 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:06:34.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:06:38.928: INFO: Waiting up to 5m0s for pod "client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750" in namespace "pods-8055" to be "success or failure"
Apr 20 16:06:38.938: INFO: Pod "client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750": Phase="Pending", Reason="", readiness=false. Elapsed: 9.373113ms
Apr 20 16:06:40.942: INFO: Pod "client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013562793s
Apr 20 16:06:42.946: INFO: Pod "client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01743221s
STEP: Saw pod success
Apr 20 16:06:42.946: INFO: Pod "client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750" satisfied condition "success or failure"
Apr 20 16:06:42.948: INFO: Trying to get logs from node iruya-worker pod client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750 container env3cont:
STEP: delete the pod
Apr 20 16:06:42.968: INFO: Waiting for pod client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750 to disappear
Apr 20 16:06:42.973: INFO: Pod client-envvars-7b942aa6-6359-42f7-bcc6-6a2fccb53750 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:06:42.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8055" for this suite.
Apr 20 16:07:33.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:07:33.105: INFO: namespace pods-8055 deletion completed in 50.10942409s
• [SLOW TEST:58.353 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:07:33.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 20 16:07:33.181: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:07:41.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-199" for this suite.
Apr 20 16:08:05.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:08:05.761: INFO: namespace init-container-199 deletion completed in 24.137925458s
• [SLOW TEST:32.656 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:08:05.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 20 16:08:05.821: INFO: Waiting up to 5m0s for pod "pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d" in namespace "emptydir-8643" to be "success or failure"
Apr 20 16:08:05.837: INFO: Pod "pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.148525ms
Apr 20 16:08:07.841: INFO: Pod "pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020284264s
Apr 20 16:08:09.845: INFO: Pod "pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024235253s
STEP: Saw pod success
Apr 20 16:08:09.845: INFO: Pod "pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d" satisfied condition "success or failure"
Apr 20 16:08:09.848: INFO: Trying to get logs from node iruya-worker pod pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d container test-container:
STEP: delete the pod
Apr 20 16:08:09.868: INFO: Waiting for pod pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d to disappear
Apr 20 16:08:09.888: INFO: Pod pod-712bd3be-1338-47d6-9dc4-76bbdee83f3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:08:09.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8643" for this suite.
Apr 20 16:08:15.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:08:16.049: INFO: namespace emptydir-8643 deletion completed in 6.157403545s
• [SLOW TEST:10.288 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:08:16.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-673a99be-3ac4-4f0b-98a3-33596a704ec7
STEP: Creating a pod to test consume configMaps
Apr 20 16:08:16.163: INFO: Waiting up to 5m0s for pod "pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99" in namespace "configmap-953" to be "success or failure"
Apr 20 16:08:16.166: INFO: Pod "pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271204ms
Apr 20 16:08:18.171: INFO: Pod "pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007929049s
Apr 20 16:08:20.174: INFO: Pod "pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011211455s
STEP: Saw pod success
Apr 20 16:08:20.174: INFO: Pod "pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99" satisfied condition "success or failure"
Apr 20 16:08:20.177: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99 container configmap-volume-test:
STEP: delete the pod
Apr 20 16:08:20.220: INFO: Waiting for pod pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99 to disappear
Apr 20 16:08:20.238: INFO: Pod pod-configmaps-6358ec7e-9810-4178-add5-72bdb121ab99 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:08:20.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-953" for this suite.
Apr 20 16:08:26.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:08:26.359: INFO: namespace configmap-953 deletion completed in 6.115909886s
• [SLOW TEST:10.309 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:08:26.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 20 16:08:34.502: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:34.509: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:36.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:36.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:38.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:38.512: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:40.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:40.512: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:42.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:42.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:44.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:44.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:46.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:46.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:48.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:48.524: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:50.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:50.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:52.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:52.513: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 20 16:08:54.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 20 16:08:54.513: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:08:54.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1282" for this suite.
Apr 20 16:09:18.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:09:18.628: INFO: namespace container-lifecycle-hook-1282 deletion completed in 24.104184546s
• [SLOW TEST:52.269 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:09:18.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:09:18.653: INFO: Creating ReplicaSet my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c
Apr 20 16:09:18.730: INFO: Pod name my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c: Found 0 pods out of 1
Apr 20 16:09:23.735: INFO: Pod name my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c: Found 1 pods out of 1
Apr 20 16:09:23.735: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c" is running
Apr 20 16:09:23.738: INFO: Pod "my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c-qw4zr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:09:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:09:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:09:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:09:18 +0000 UTC Reason: Message:}])
Apr 20 16:09:23.738: INFO: Trying to dial the pod
Apr 20 16:09:28.750: INFO: Controller my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c: Got expected result from replica 1 [my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c-qw4zr]: "my-hostname-basic-b37747e5-85f0-4d61-aea2-11f92067452c-qw4zr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:09:28.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3201" for this suite.
Apr 20 16:09:34.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:09:34.851: INFO: namespace replicaset-3201 deletion completed in 6.096992628s

• [SLOW TEST:16.223 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:09:34.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:09:41.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1467" for this suite.
Apr 20 16:09:49.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:09:49.586: INFO: namespace namespaces-1467 deletion completed in 8.500754456s
STEP: Destroying namespace "nsdeletetest-5417" for this suite.
Apr 20 16:09:49.588: INFO: Namespace nsdeletetest-5417 was already deleted
STEP: Destroying namespace "nsdeletetest-6139" for this suite.
Apr 20 16:09:55.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:09:55.762: INFO: namespace nsdeletetest-6139 deletion completed in 6.174404086s

• [SLOW TEST:20.911 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:09:55.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0a7a6b70-4a43-4449-8a51-f43dadde9688
STEP: Creating a pod to test consume secrets
Apr 20 16:09:55.941: INFO: Waiting up to 5m0s for pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047" in namespace "secrets-8059" to be "success or failure"
Apr 20 16:09:55.953: INFO: Pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047": Phase="Pending", Reason="", readiness=false. Elapsed: 11.454938ms
Apr 20 16:09:57.956: INFO: Pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014682727s
Apr 20 16:09:59.959: INFO: Pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017779024s
Apr 20 16:10:01.963: INFO: Pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021373531s
STEP: Saw pod success
Apr 20 16:10:01.963: INFO: Pod "pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047" satisfied condition "success or failure"
Apr 20 16:10:01.965: INFO: Trying to get logs from node iruya-worker pod pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047 container secret-volume-test:
STEP: delete the pod
Apr 20 16:10:02.975: INFO: Waiting for pod pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047 to disappear
Apr 20 16:10:02.978: INFO: Pod pod-secrets-afd769c0-6f89-4a88-83df-b8a2112e7047 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:10:02.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8059" for this suite.
Apr 20 16:10:09.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:10:09.446: INFO: namespace secrets-8059 deletion completed in 6.225298298s

• [SLOW TEST:13.683 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:10:09.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 20 16:10:21.616: INFO: Pod pod-hostip-4c5a392e-c375-41c2-a468-413576f01986 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:10:21.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1445" for this suite.
Apr 20 16:10:45.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:10:45.768: INFO: namespace pods-1445 deletion completed in 24.148787087s

• [SLOW TEST:36.321 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:10:45.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 20 16:10:46.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1030'
Apr 20 16:10:47.186: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 20 16:10:47.186: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 20 16:10:49.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1030'
Apr 20 16:10:49.323: INFO: stderr: ""
Apr 20 16:10:49.323: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:10:49.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1030" for this suite.
Apr 20 16:10:55.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:10:55.473: INFO: namespace kubectl-1030 deletion completed in 6.147701881s

• [SLOW TEST:9.705 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:10:55.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 20 16:10:56.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2817 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 20 16:11:02.788: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Apr 20 16:11:02.788: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:11:04.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2817" for this suite.
Apr 20 16:11:10.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:11:10.984: INFO: namespace kubectl-2817 deletion completed in 6.133160749s

• [SLOW TEST:15.511 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:11:10.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:12:26.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-382" for this suite.
Apr 20 16:12:32.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:12:32.532: INFO: namespace container-runtime-382 deletion completed in 6.076510415s

• [SLOW TEST:81.547 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:12:32.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 20 16:12:32.631: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:12:32.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7156" for this suite.
Apr 20 16:12:38.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:12:38.891: INFO: namespace kubectl-7156 deletion completed in 6.153582599s

• [SLOW TEST:6.359 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:12:38.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:12:38.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d" in namespace "projected-5365" to be "success or failure"
Apr 20 16:12:39.023: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.968954ms
Apr 20 16:12:41.026: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035654388s
Apr 20 16:12:43.606: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.615525756s
Apr 20 16:12:45.613: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623132864s
Apr 20 16:12:47.616: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626302011s
Apr 20 16:12:49.851: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.861350562s
Apr 20 16:12:51.988: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.998300131s
Apr 20 16:12:53.992: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.001462207s
Apr 20 16:12:55.994: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.004127472s
Apr 20 16:12:58.127: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Running", Reason="", readiness=true. Elapsed: 19.136559774s
Apr 20 16:13:00.129: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.139286562s
STEP: Saw pod success
Apr 20 16:13:00.129: INFO: Pod "downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d" satisfied condition "success or failure"
Apr 20 16:13:00.131: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d container client-container:
STEP: delete the pod
Apr 20 16:13:00.212: INFO: Waiting for pod downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d to disappear
Apr 20 16:13:00.221: INFO: Pod downwardapi-volume-ec06a912-d9cf-4819-93e2-bbc59026200d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:13:00.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5365" for this suite.
Apr 20 16:13:06.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:13:06.336: INFO: namespace projected-5365 deletion completed in 6.112922802s

• [SLOW TEST:27.445 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:13:06.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 20 16:13:15.534: INFO: 10 pods remaining
Apr 20 16:13:15.534: INFO: 10 pods has nil DeletionTimestamp
Apr 20 16:13:15.534: INFO:
Apr 20 16:13:16.420: INFO: 0 pods remaining
Apr 20 16:13:16.420: INFO: 0 pods has nil DeletionTimestamp
Apr 20 16:13:16.420: INFO:
Apr 20 16:13:17.929: INFO: 0 pods remaining
Apr 20 16:13:17.929: INFO: 0 pods has nil DeletionTimestamp
Apr 20 16:13:17.929: INFO:
STEP: Gathering metrics
W0420 16:13:18.132555 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 20 16:13:18.132: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:13:18.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4247" for this suite.
Apr 20 16:13:24.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:13:24.471: INFO: namespace gc-4247 deletion completed in 6.336600978s

• [SLOW TEST:18.134 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:13:24.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:13:24.565: INFO: Creating deployment "test-recreate-deployment"
Apr 20 16:13:24.580: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 20 16:13:24.606: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 20 16:13:28.475: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 20 16:13:28.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 16:13:30.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 16:13:32.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754532004, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 16:13:34.597: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 20 16:13:34.601: INFO: Updating deployment test-recreate-deployment
Apr 20 16:13:34.601: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 20 16:13:34.826: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6868,SelfLink:/apis/apps/v1/namespaces/deployment-6868/deployments/test-recreate-deployment,UID:5c16251d-3650-475d-bf9b-ef0e2d4ee2bf,ResourceVersion:1293137,Generation:2,CreationTimestamp:2021-04-20 16:13:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-04-20 16:13:34 +0000 UTC 2021-04-20 16:13:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-04-20 16:13:34 +0000 UTC 2021-04-20 16:13:24 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 20 16:13:34.831: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6868,SelfLink:/apis/apps/v1/namespaces/deployment-6868/replicasets/test-recreate-deployment-5c8c9cc69d,UID:09483142-8320-43e9-b46a-f74da0ac9466,ResourceVersion:1293135,Generation:1,CreationTimestamp:2021-04-20 16:13:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5c16251d-3650-475d-bf9b-ef0e2d4ee2bf 0xc002a850b7 0xc002a850b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 20 16:13:34.831: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 20 16:13:34.831: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6868,SelfLink:/apis/apps/v1/namespaces/deployment-6868/replicasets/test-recreate-deployment-6df85df6b9,UID:46ec08fb-347d-45f2-9b62-593a530cd531,ResourceVersion:1293126,Generation:2,CreationTimestamp:2021-04-20 16:13:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5c16251d-3650-475d-bf9b-ef0e2d4ee2bf 0xc002a85187 0xc002a85188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 20 16:13:34.834: INFO: Pod "test-recreate-deployment-5c8c9cc69d-4jh66" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-4jh66,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6868,SelfLink:/api/v1/namespaces/deployment-6868/pods/test-recreate-deployment-5c8c9cc69d-4jh66,UID:968699a9-f8e3-4350-b39c-ebe7d96b615d,ResourceVersion:1293138,Generation:0,CreationTimestamp:2021-04-20 16:13:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 09483142-8320-43e9-b46a-f74da0ac9466 0xc002a85a27 0xc002a85a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wmtzm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wmtzm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wmtzm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a85aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a85ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:13:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 16:13:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:13:34.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6868" for this 
suite.
Apr 20 16:13:40.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:13:40.934: INFO: namespace deployment-6868 deletion completed in 6.097221914s
• [SLOW TEST:16.463 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:13:40.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-84f7a7da-3540-40b8-9567-0b1a46d328f5
STEP: Creating a pod to test consume secrets
Apr 20 16:13:41.657: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92" in namespace "projected-5780" to be "success or failure"
Apr 20 16:13:41.681: INFO: Pod
"pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92": Phase="Pending", Reason="", readiness=false. Elapsed: 23.933119ms
Apr 20 16:13:43.702: INFO: Pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044737921s
Apr 20 16:13:45.732: INFO: Pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074671502s
Apr 20 16:13:47.744: INFO: Pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086919892s
Apr 20 16:13:49.746: INFO: Pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089236483s
STEP: Saw pod success
Apr 20 16:13:49.747: INFO: Pod "pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92" satisfied condition "success or failure"
Apr 20 16:13:49.748: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92 container projected-secret-volume-test:
STEP: delete the pod
Apr 20 16:13:49.865: INFO: Waiting for pod pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92 to disappear
Apr 20 16:13:49.894: INFO: Pod pod-projected-secrets-051e19ab-7a90-42b6-805e-b7c5a1d23d92 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:13:49.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5780" for this suite.
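The projected-secret test above creates a pod whose volume projects a Secret and waits for the pod to run to completion. A minimal sketch of what such a pod manifest looks like, with illustrative names rather than the test's generated ones, expressed here as a plain Python dict so the structure is checkable:

```python
# Hypothetical manifest mirroring the shape of the test's pod: a
# "projected" volume sourcing a Secret, mounted read-only into a
# container that reads a key from it. Names are illustrative.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{
            "name": "projected-secret-volume",
            # A projected volume can combine several sources
            # (secrets, configMaps, downwardAPI) under one mount.
            "projected": {"sources": [
                {"secret": {"name": "projected-secret-test"}},
            ]},
        }],
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/projected-secret-volume/data-1"],
            "volumeMounts": [{
                "name": "projected-secret-volume",
                "mountPath": "/etc/projected-secret-volume",
                "readOnly": True,
            }],
        }],
    },
}
```

The "success or failure" polling in the log is the framework waiting for this pod to reach `Phase=Succeeded`, then reading the container's logs to verify the secret content.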
Apr 20 16:13:55.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:13:56.011: INFO: namespace projected-5780 deletion completed in 6.114514675s
• [SLOW TEST:15.077 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:13:56.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:13:56.146: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"113e0552-a46c-4089-8f63-3a0feb3aae02", Controller:(*bool)(0xc00209d9d2), BlockOwnerDeletion:(*bool)(0xc00209d9d3)}}
Apr 20 16:13:56.158: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bef734a7-cffd-4f49-aade-384d50c88387", Controller:(*bool)(0xc00309d4ea), BlockOwnerDeletion:(*bool)(0xc00309d4eb)}}
Apr 20 16:13:56.224: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"80b36c86-60bb-4a29-8000-d960b0218971", Controller:(*bool)(0xc00309d67a), BlockOwnerDeletion:(*bool)(0xc00309d67b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:14:01.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4922" for this suite.
Apr 20 16:14:07.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:14:08.070: INFO: namespace gc-4922 deletion completed in 6.487071868s
• [SLOW TEST:12.059 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:14:08.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:14:08.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5" in namespace "downward-api-1474" to be "success or failure"
Apr 20 16:14:08.167: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.773084ms
Apr 20 16:14:10.685: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523067303s
Apr 20 16:14:12.960: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.798616244s
Apr 20 16:14:15.014: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852136813s
Apr 20 16:14:17.840: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.678239359s
Apr 20 16:14:19.843: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.681042978s
STEP: Saw pod success
Apr 20 16:14:19.843: INFO: Pod "downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5" satisfied condition "success or failure"
Apr 20 16:14:19.845: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5 container client-container:
STEP: delete the pod
Apr 20 16:14:19.973: INFO: Waiting for pod downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5 to disappear
Apr 20 16:14:19.999: INFO: Pod downwardapi-volume-4c18057a-551c-4f1e-9c49-9e0ffc3dedd5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:14:19.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1474" for this suite.
Apr 20 16:14:28.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:14:28.086: INFO: namespace downward-api-1474 deletion completed in 8.084016893s
• [SLOW TEST:20.015 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:14:28.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 20 16:14:28.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4285' Apr 20 16:14:28.484: INFO: stderr: "" Apr 20 16:14:28.485: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Apr 20 16:14:29.488: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:29.488: INFO: Found 0 / 1 Apr 20 16:14:30.488: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:30.488: INFO: Found 0 / 1 Apr 20 16:14:31.709: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:31.709: INFO: Found 0 / 1 Apr 20 16:14:32.487: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:32.487: INFO: Found 0 / 1 Apr 20 16:14:33.487: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:33.487: INFO: Found 0 / 1 Apr 20 16:14:34.488: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:34.488: INFO: Found 1 / 1 Apr 20 16:14:34.488: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Apr 20 16:14:34.490: INFO: Selector matched 1 pods for map[app:redis] Apr 20 16:14:34.490: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 20 16:14:34.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285' Apr 20 16:14:34.583: INFO: stderr: "" Apr 20 16:14:34.583: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Apr 16:14:33.589 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Apr 16:14:33.589 # Server started, Redis version 3.2.12\n1:M 20 Apr 16:14:33.589 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Apr 16:14:33.589 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 20 16:14:34.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285 --tail=1' Apr 20 16:14:34.663: INFO: stderr: "" Apr 20 16:14:34.663: INFO: stdout: "1:M 20 Apr 16:14:33.589 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 20 16:14:34.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285 --limit-bytes=1' Apr 20 16:14:34.757: INFO: stderr: "" Apr 20 16:14:34.757: INFO: stdout: " " STEP: exposing timestamps Apr 20 16:14:34.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285 --tail=1 --timestamps' Apr 20 16:14:34.860: INFO: stderr: "" Apr 20 16:14:34.860: INFO: stdout: "2021-04-20T16:14:33.58999935Z 1:M 20 Apr 16:14:33.589 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 20 16:14:37.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285 --since=1s' Apr 20 16:14:37.457: INFO: stderr: "" Apr 20 16:14:37.457: INFO: stdout: "" Apr 20 16:14:37.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-99b6d redis-master --namespace=kubectl-4285 --since=24h' Apr 20 16:14:37.565: INFO: stderr: "" Apr 20 16:14:37.565: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Apr 16:14:33.589 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Apr 16:14:33.589 # Server started, Redis version 3.2.12\n1:M 20 Apr 16:14:33.589 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Apr 16:14:33.589 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 20 16:14:37.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4285' Apr 20 16:14:37.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 20 16:14:37.658: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 20 16:14:37.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4285' Apr 20 16:14:37.739: INFO: stderr: "No resources found.\n" Apr 20 16:14:37.739: INFO: stdout: "" Apr 20 16:14:37.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4285 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 20 16:14:37.822: INFO: stderr: "" Apr 20 16:14:37.822: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:14:37.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4285" for this suite. 
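The log-filtering flags this test exercised (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be reproduced against any running pod. A minimal sketch, assuming the pod and namespace names from the run above and a reachable cluster:

```shell
# Reproduce the kubectl log-filtering steps from the test above.
# Pod/namespace names are the ones generated in this run; substitute your own.
KUBECONFIG=/root/.kube/config
POD=redis-master-99b6d
NS=kubectl-4285

kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS"                   # full container log
kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS" --tail=1          # last line only
kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS" --limit-bytes=1   # first byte only
kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS" --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS" --since=1s        # empty if nothing logged in the last second
kubectl --kubeconfig="$KUBECONFIG" logs "$POD" redis-master -n "$NS" --since=24h       # everything from the last day
```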
Apr 20 16:14:43.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:14:43.903: INFO: namespace kubectl-4285 deletion completed in 6.077964462s • [SLOW TEST:15.817 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:14:43.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-c6d6068b-874e-491e-91bf-3d807b334343 STEP: Creating a pod to test consume 
configMaps Apr 20 16:14:44.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73" in namespace "configmap-2039" to be "success or failure" Apr 20 16:14:44.159: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Pending", Reason="", readiness=false. Elapsed: 86.942177ms Apr 20 16:14:46.466: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393952879s Apr 20 16:14:48.469: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397073579s Apr 20 16:14:50.472: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400496211s Apr 20 16:14:52.475: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Running", Reason="", readiness=true. Elapsed: 8.403267415s Apr 20 16:14:54.815: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.743600111s STEP: Saw pod success Apr 20 16:14:54.815: INFO: Pod "pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73" satisfied condition "success or failure" Apr 20 16:14:54.913: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73 container configmap-volume-test: STEP: delete the pod Apr 20 16:14:54.966: INFO: Waiting for pod pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73 to disappear Apr 20 16:14:55.212: INFO: Pod pod-configmaps-b3f41cf3-8c44-40dd-b5cd-68d644387d73 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:14:55.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2039" for this suite. 
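The pod the ConfigMap test creates can be sketched by hand: a ConfigMap consumed as a volume whose `items` list remaps a key to a custom path, read by a non-root container. Names and the busybox image here are illustrative, not the generated ones from the run:

```shell
# Hedged sketch of the "consumable from pods in volume with mappings as
# non-root" pod shape; all names are illustrative.
kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-volume
spec:
  securityContext:
    runAsUser: 1000                 # the non-root, [LinuxOnly] part of the test
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-cm
      items:                        # "with mappings": key -> custom file path
      - key: data-1
        path: path/to/data-1
  restartPolicy: Never
EOF
```

Like the e2e test, the pod runs to `Succeeded` once the container has printed the mapped file and exited.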
Apr 20 16:15:01.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:15:01.386: INFO: namespace configmap-2039 deletion completed in 6.130862277s • [SLOW TEST:17.482 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:15:01.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 20 16:15:01.481: INFO: Waiting up to 5m0s for pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7" in namespace "emptydir-1203" to be "success or failure" Apr 20 16:15:01.510: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", 
readiness=false. Elapsed: 29.838133ms Apr 20 16:15:03.829: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348615516s Apr 20 16:15:05.831: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350906366s Apr 20 16:15:08.082: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601343646s Apr 20 16:15:10.085: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604598327s Apr 20 16:15:12.093: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.612686207s Apr 20 16:15:14.123: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.642385544s STEP: Saw pod success Apr 20 16:15:14.123: INFO: Pod "pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7" satisfied condition "success or failure" Apr 20 16:15:14.125: INFO: Trying to get logs from node iruya-worker pod pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7 container test-container: STEP: delete the pod Apr 20 16:15:14.174: INFO: Waiting for pod pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7 to disappear Apr 20 16:15:14.207: INFO: Pod pod-e5494d11-c474-4d8c-b5e6-e27eb5037bf7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:15:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1203" for this suite. 
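The emptyDir variant follows the same pattern. A sketch of the "(non-root,0666,default)" case, where "default" means the default medium (node disk, as opposed to `medium: Memory`/tmpfs) and 0666 is the mode the test container gives the files it creates; the e2e suite uses its mounttest image, busybox stands in here:

```shell
# Hedged sketch of the emptyDir test pod; names and image are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir-0666
spec:
  securityContext:
    runAsUser: 1000                 # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 111 && echo ok > /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium; add medium: Memory for tmpfs
  restartPolicy: Never
EOF
```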
Apr 20 16:15:20.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:15:20.375: INFO: namespace emptydir-1203 deletion completed in 6.165358771s • [SLOW TEST:18.989 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:15:20.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3572f76f-1a96-4982-b8f0-577d55a29635 STEP: Creating a pod to test consume configMaps Apr 20 16:15:20.448: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921" in namespace "projected-6561" to be "success or failure" Apr 20 16:15:20.453: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267414ms Apr 20 16:15:22.709: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260309437s Apr 20 16:15:24.713: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264068359s Apr 20 16:15:26.902: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453466577s Apr 20 16:15:28.904: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455662307s Apr 20 16:15:30.907: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Running", Reason="", readiness=true. Elapsed: 10.458530188s Apr 20 16:15:32.910: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.461140672s STEP: Saw pod success Apr 20 16:15:32.910: INFO: Pod "pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921" satisfied condition "success or failure" Apr 20 16:15:32.911: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921 container projected-configmap-volume-test: STEP: delete the pod Apr 20 16:15:33.378: INFO: Waiting for pod pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921 to disappear Apr 20 16:15:33.446: INFO: Pod pod-projected-configmaps-97187283-b7e2-41d2-bef1-a4de1718e921 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:15:33.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6561" for this suite. Apr 20 16:15:39.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:15:39.531: INFO: namespace projected-6561 deletion completed in 6.082051151s • [SLOW TEST:19.155 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:15:39.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 20 16:15:39.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7578' Apr 20 16:15:48.024: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 20 16:15:48.024: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 20 16:15:48.065: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 20 16:15:48.079: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 20 16:15:48.124: INFO: scanned /root for discovery docs: Apr 20 16:15:48.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7578' Apr 20 16:16:12.589: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 20 16:16:12.589: INFO: stdout: "Created e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94\nScaling up e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 20 16:16:12.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7578' Apr 20 16:16:12.726: INFO: stderr: "" Apr 20 16:16:12.726: INFO: stdout: "e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz e2e-test-nginx-rc-lnm8x " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Apr 20 16:16:17.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7578' Apr 20 16:16:17.811: INFO: stderr: "" Apr 20 16:16:17.811: INFO: stdout: "e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz e2e-test-nginx-rc-lnm8x " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Apr 20 16:16:22.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7578' Apr 20 16:16:22.892: INFO: stderr: "" Apr 20 16:16:22.892: INFO: stdout: "e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz " Apr 20 16:16:22.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7578' Apr 20 16:16:22.983: INFO: stderr: "" Apr 20 16:16:22.983: INFO: stdout: "true" Apr 20 16:16:22.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7578' Apr 20 16:16:23.072: INFO: stderr: "" Apr 20 16:16:23.072: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 20 16:16:23.072: INFO: e2e-test-nginx-rc-f640e725204ed0800337ff476d0cde94-489cz is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 20 16:16:23.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7578' Apr 20 16:16:23.167: INFO: stderr: "" Apr 20 16:16:23.167: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:16:23.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7578" for this suite. 
Apr 20 16:17:05.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:17:05.291: INFO: namespace kubectl-7578 deletion completed in 42.120637552s • [SLOW TEST:85.760 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:17:05.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5655 STEP: creating a selector STEP: Creating the service pods in kubernetes 
Apr 20 16:17:06.886: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 20 16:18:02.064: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.202 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5655 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:18:02.064: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:18:03.188: INFO: Found all expected endpoints: [netserver-0] Apr 20 16:18:03.206: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.15 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5655 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:18:03.206: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:18:04.312: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:18:04.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5655" for this suite. 
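The `ExecWithOptions` entries above show how the networking test probes UDP connectivity: it execs into a hostNetwork helper pod and sends a datagram to each netserver pod's IP with `nc`. A hedged sketch of the same probe run by hand; the pod IPs are the ones from this run and will differ per cluster:

```shell
# Re-run the UDP probe the test performs inside its host test pod.
# 10.244.2.202 is a pod IP from this particular run; substitute your own.
kubectl exec -n pod-network-test-5655 host-test-container-pod -c hostexec -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.202 8081 | grep -v "^\s*$"'
# A non-empty reply (the netserver echoes its hostname) means the
# node-to-pod UDP path works; grep -v strips blank lines from nc output.
```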
Apr 20 16:18:31.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:18:31.144: INFO: namespace pod-network-test-5655 deletion completed in 26.59730811s • [SLOW TEST:85.852 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:18:31.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3845 I0420 16:18:31.573161 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: 
svc-latency-3845, replica count: 1 I0420 16:18:32.623560 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:33.623773 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:34.623989 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:35.624333 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:36.624553 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:37.624746 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:38.624972 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0420 16:18:39.625147 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 20 16:18:39.845: INFO: Created: latency-svc-zjns8 Apr 20 16:18:39.849: INFO: Got endpoints: latency-svc-zjns8 [123.67213ms] Apr 20 16:18:40.013: INFO: Created: latency-svc-7nndc Apr 20 16:18:40.016: INFO: Got endpoints: latency-svc-7nndc [166.690665ms] Apr 20 16:18:40.033: INFO: Created: latency-svc-t4bbc Apr 20 16:18:40.058: INFO: Got endpoints: latency-svc-t4bbc [208.817041ms] Apr 20 16:18:40.087: INFO: Created: latency-svc-5bhtb Apr 20 16:18:40.193: INFO: Got endpoints: latency-svc-5bhtb [343.247846ms] Apr 20 16:18:40.195: INFO: Created: latency-svc-564pn Apr 20 
16:18:40.232: INFO: Got endpoints: latency-svc-564pn [381.051856ms] Apr 20 16:18:40.348: INFO: Created: latency-svc-dnxm6 Apr 20 16:18:40.371: INFO: Got endpoints: latency-svc-dnxm6 [521.597155ms] Apr 20 16:18:40.420: INFO: Created: latency-svc-5nctg Apr 20 16:18:40.428: INFO: Got endpoints: latency-svc-5nctg [577.865849ms] Apr 20 16:18:40.445: INFO: Created: latency-svc-gzrfd Apr 20 16:18:40.473: INFO: Got endpoints: latency-svc-gzrfd [622.950407ms] Apr 20 16:18:40.481: INFO: Created: latency-svc-pvh2v Apr 20 16:18:40.494: INFO: Got endpoints: latency-svc-pvh2v [643.217748ms] Apr 20 16:18:40.510: INFO: Created: latency-svc-t9f4s Apr 20 16:18:40.524: INFO: Got endpoints: latency-svc-t9f4s [674.179614ms] Apr 20 16:18:40.565: INFO: Created: latency-svc-swxpc Apr 20 16:18:40.623: INFO: Got endpoints: latency-svc-swxpc [773.109056ms] Apr 20 16:18:40.625: INFO: Created: latency-svc-gv4nx Apr 20 16:18:40.650: INFO: Got endpoints: latency-svc-gv4nx [799.962925ms] Apr 20 16:18:40.693: INFO: Created: latency-svc-7qqhc Apr 20 16:18:40.704: INFO: Got endpoints: latency-svc-7qqhc [853.8301ms] Apr 20 16:18:40.768: INFO: Created: latency-svc-mtr4j Apr 20 16:18:40.782: INFO: Got endpoints: latency-svc-mtr4j [931.519096ms] Apr 20 16:18:40.783: INFO: Created: latency-svc-9xlmt Apr 20 16:18:40.807: INFO: Got endpoints: latency-svc-9xlmt [956.32336ms] Apr 20 16:18:40.832: INFO: Created: latency-svc-mnv9c Apr 20 16:18:40.843: INFO: Got endpoints: latency-svc-mnv9c [992.882465ms] Apr 20 16:18:40.869: INFO: Created: latency-svc-2bsqk Apr 20 16:18:40.887: INFO: Got endpoints: latency-svc-2bsqk [871.042159ms] Apr 20 16:18:40.904: INFO: Created: latency-svc-nhmlt Apr 20 16:18:40.921: INFO: Got endpoints: latency-svc-nhmlt [862.810828ms] Apr 20 16:18:40.958: INFO: Created: latency-svc-s9g89 Apr 20 16:18:40.974: INFO: Got endpoints: latency-svc-s9g89 [781.633752ms] Apr 20 16:18:41.037: INFO: Created: latency-svc-4njfh Apr 20 16:18:41.098: INFO: Got endpoints: latency-svc-4njfh [865.819761ms] 
Apr 20 16:18:41.098: INFO: Created: latency-svc-4bbrl Apr 20 16:18:41.180: INFO: Got endpoints: latency-svc-4bbrl [808.580172ms] Apr 20 16:18:41.211: INFO: Created: latency-svc-xsv6d Apr 20 16:18:41.237: INFO: Got endpoints: latency-svc-xsv6d [808.904448ms] Apr 20 16:18:41.302: INFO: Created: latency-svc-2dmvv Apr 20 16:18:41.309: INFO: Got endpoints: latency-svc-2dmvv [835.369395ms] Apr 20 16:18:41.377: INFO: Created: latency-svc-hbsmx Apr 20 16:18:41.479: INFO: Got endpoints: latency-svc-hbsmx [985.400042ms] Apr 20 16:18:41.518: INFO: Created: latency-svc-f6hxs Apr 20 16:18:41.548: INFO: Got endpoints: latency-svc-f6hxs [1.024321757s] Apr 20 16:18:42.146: INFO: Created: latency-svc-229kk Apr 20 16:18:42.195: INFO: Got endpoints: latency-svc-229kk [1.572407708s] Apr 20 16:18:42.692: INFO: Created: latency-svc-b5nbl Apr 20 16:18:43.401: INFO: Got endpoints: latency-svc-b5nbl [2.75123934s] Apr 20 16:18:44.586: INFO: Created: latency-svc-htt75 Apr 20 16:18:45.109: INFO: Got endpoints: latency-svc-htt75 [4.404814894s] Apr 20 16:18:45.483: INFO: Created: latency-svc-thr8w Apr 20 16:18:45.555: INFO: Got endpoints: latency-svc-thr8w [4.772530346s] Apr 20 16:18:45.703: INFO: Created: latency-svc-gfvq5 Apr 20 16:18:45.750: INFO: Got endpoints: latency-svc-gfvq5 [4.942656124s] Apr 20 16:18:45.764: INFO: Created: latency-svc-7jncn Apr 20 16:18:45.771: INFO: Got endpoints: latency-svc-7jncn [4.928010367s] Apr 20 16:18:45.898: INFO: Created: latency-svc-g5xcl Apr 20 16:18:45.939: INFO: Created: latency-svc-wmvpf Apr 20 16:18:45.940: INFO: Got endpoints: latency-svc-g5xcl [5.05345128s] Apr 20 16:18:45.989: INFO: Created: latency-svc-8pvjp Apr 20 16:18:45.989: INFO: Got endpoints: latency-svc-wmvpf [5.068084061s] Apr 20 16:18:46.085: INFO: Got endpoints: latency-svc-8pvjp [5.110572815s] Apr 20 16:18:46.550: INFO: Created: latency-svc-5dn9j Apr 20 16:18:46.691: INFO: Got endpoints: latency-svc-5dn9j [5.592858324s] Apr 20 16:18:47.171: INFO: Created: latency-svc-lf7vh Apr 20 
16:18:47.331: INFO: Got endpoints: latency-svc-lf7vh [6.151421501s] Apr 20 16:18:48.370: INFO: Created: latency-svc-wmhw8 Apr 20 16:18:49.146: INFO: Got endpoints: latency-svc-wmhw8 [7.908956454s] Apr 20 16:18:49.501: INFO: Created: latency-svc-qjhkd Apr 20 16:18:49.700: INFO: Got endpoints: latency-svc-qjhkd [8.391633428s] Apr 20 16:18:49.701: INFO: Created: latency-svc-xwjng Apr 20 16:18:49.873: INFO: Got endpoints: latency-svc-xwjng [8.393945162s] Apr 20 16:18:50.583: INFO: Created: latency-svc-7kns5 Apr 20 16:18:50.603: INFO: Got endpoints: latency-svc-7kns5 [9.055008689s] Apr 20 16:18:51.279: INFO: Created: latency-svc-h8dpn Apr 20 16:18:51.978: INFO: Got endpoints: latency-svc-h8dpn [9.783073593s] Apr 20 16:18:52.755: INFO: Created: latency-svc-4slk4 Apr 20 16:18:52.820: INFO: Got endpoints: latency-svc-4slk4 [9.418966493s] Apr 20 16:18:54.841: INFO: Created: latency-svc-c86ql Apr 20 16:18:55.469: INFO: Got endpoints: latency-svc-c86ql [10.359685354s] Apr 20 16:18:57.440: INFO: Created: latency-svc-lk4lx Apr 20 16:18:59.601: INFO: Got endpoints: latency-svc-lk4lx [14.046155268s] Apr 20 16:19:01.268: INFO: Created: latency-svc-zbjdp Apr 20 16:19:02.107: INFO: Got endpoints: latency-svc-zbjdp [16.357083584s] Apr 20 16:19:02.821: INFO: Created: latency-svc-7rfl5 Apr 20 16:19:02.918: INFO: Got endpoints: latency-svc-7rfl5 [17.146570604s] Apr 20 16:19:03.544: INFO: Created: latency-svc-fmkmt Apr 20 16:19:04.900: INFO: Got endpoints: latency-svc-fmkmt [18.959503682s] Apr 20 16:19:06.570: INFO: Created: latency-svc-7bngm Apr 20 16:19:06.573: INFO: Got endpoints: latency-svc-7bngm [20.583919489s] Apr 20 16:19:08.136: INFO: Created: latency-svc-nbbfq Apr 20 16:19:08.202: INFO: Got endpoints: latency-svc-nbbfq [22.117541289s] Apr 20 16:19:09.512: INFO: Created: latency-svc-j8zwt Apr 20 16:19:09.520: INFO: Got endpoints: latency-svc-j8zwt [22.829479819s] Apr 20 16:19:09.769: INFO: Created: latency-svc-zkhjq Apr 20 16:19:09.808: INFO: Got endpoints: latency-svc-zkhjq 
[22.476278529s] Apr 20 16:19:10.678: INFO: Created: latency-svc-26dnh Apr 20 16:19:11.026: INFO: Got endpoints: latency-svc-26dnh [21.879284262s] Apr 20 16:19:11.027: INFO: Created: latency-svc-t9qvq Apr 20 16:19:11.036: INFO: Got endpoints: latency-svc-t9qvq [21.335507018s] Apr 20 16:19:11.685: INFO: Created: latency-svc-fj4w7 Apr 20 16:19:11.749: INFO: Got endpoints: latency-svc-fj4w7 [21.876010061s] Apr 20 16:19:12.013: INFO: Created: latency-svc-fhs9b Apr 20 16:19:12.030: INFO: Got endpoints: latency-svc-fhs9b [21.426491749s] Apr 20 16:19:12.523: INFO: Created: latency-svc-z9pgk Apr 20 16:19:12.821: INFO: Got endpoints: latency-svc-z9pgk [20.842981753s] Apr 20 16:19:12.990: INFO: Created: latency-svc-8srpp Apr 20 16:19:13.024: INFO: Got endpoints: latency-svc-8srpp [20.203942876s] Apr 20 16:19:13.084: INFO: Created: latency-svc-hjkht Apr 20 16:19:13.121: INFO: Got endpoints: latency-svc-hjkht [17.651849294s] Apr 20 16:19:13.145: INFO: Created: latency-svc-65db4 Apr 20 16:19:13.187: INFO: Got endpoints: latency-svc-65db4 [13.585653019s] Apr 20 16:19:13.267: INFO: Created: latency-svc-f9pc7 Apr 20 16:19:13.293: INFO: Got endpoints: latency-svc-f9pc7 [11.186144667s] Apr 20 16:19:13.749: INFO: Created: latency-svc-44tgh Apr 20 16:19:14.019: INFO: Got endpoints: latency-svc-44tgh [11.101587194s] Apr 20 16:19:14.386: INFO: Created: latency-svc-lgzxj Apr 20 16:19:14.419: INFO: Got endpoints: latency-svc-lgzxj [9.518926398s] Apr 20 16:19:14.520: INFO: Created: latency-svc-sxrl8 Apr 20 16:19:14.720: INFO: Got endpoints: latency-svc-sxrl8 [8.146811026s] Apr 20 16:19:15.066: INFO: Created: latency-svc-r5b6t Apr 20 16:19:15.115: INFO: Got endpoints: latency-svc-r5b6t [6.912012162s] Apr 20 16:19:15.463: INFO: Created: latency-svc-kndn7 Apr 20 16:19:15.993: INFO: Created: latency-svc-gjdnm Apr 20 16:19:15.993: INFO: Got endpoints: latency-svc-kndn7 [6.472471193s] Apr 20 16:19:16.193: INFO: Got endpoints: latency-svc-gjdnm [6.385120572s] Apr 20 16:19:16.409: INFO: Created: 
latency-svc-xhlz9 Apr 20 16:19:16.721: INFO: Created: latency-svc-5w85t Apr 20 16:19:16.721: INFO: Got endpoints: latency-svc-xhlz9 [5.695056404s] Apr 20 16:19:16.755: INFO: Got endpoints: latency-svc-5w85t [5.71885038s] Apr 20 16:19:17.041: INFO: Created: latency-svc-9n7gd Apr 20 16:19:17.396: INFO: Got endpoints: latency-svc-9n7gd [5.646497623s] Apr 20 16:19:17.661: INFO: Created: latency-svc-fcxk2 Apr 20 16:19:17.690: INFO: Got endpoints: latency-svc-fcxk2 [5.659972167s] Apr 20 16:19:17.883: INFO: Created: latency-svc-4r96g Apr 20 16:19:17.923: INFO: Got endpoints: latency-svc-4r96g [5.10137715s] Apr 20 16:19:18.069: INFO: Created: latency-svc-xdh4b Apr 20 16:19:18.325: INFO: Got endpoints: latency-svc-xdh4b [5.300256835s] Apr 20 16:19:18.330: INFO: Created: latency-svc-g7965 Apr 20 16:19:18.354: INFO: Got endpoints: latency-svc-g7965 [5.232832912s] Apr 20 16:19:18.914: INFO: Created: latency-svc-2ghhx Apr 20 16:19:19.283: INFO: Got endpoints: latency-svc-2ghhx [6.096358106s] Apr 20 16:19:19.683: INFO: Created: latency-svc-xvbvf Apr 20 16:19:19.815: INFO: Got endpoints: latency-svc-xvbvf [6.522372979s] Apr 20 16:19:20.238: INFO: Created: latency-svc-cxdj8 Apr 20 16:19:20.376: INFO: Got endpoints: latency-svc-cxdj8 [1.092960852s] Apr 20 16:19:20.685: INFO: Created: latency-svc-wfjx7 Apr 20 16:19:20.713: INFO: Created: latency-svc-67mk6 Apr 20 16:19:20.714: INFO: Got endpoints: latency-svc-wfjx7 [6.694248671s] Apr 20 16:19:20.769: INFO: Got endpoints: latency-svc-67mk6 [6.349873405s] Apr 20 16:19:20.836: INFO: Created: latency-svc-qgrtl Apr 20 16:19:20.881: INFO: Got endpoints: latency-svc-qgrtl [6.161537801s] Apr 20 16:19:21.712: INFO: Created: latency-svc-5fpvn Apr 20 16:19:21.845: INFO: Got endpoints: latency-svc-5fpvn [6.73048993s] Apr 20 16:19:22.043: INFO: Created: latency-svc-65qms Apr 20 16:19:22.084: INFO: Created: latency-svc-xq9l8 Apr 20 16:19:22.085: INFO: Got endpoints: latency-svc-65qms [6.092146435s] Apr 20 16:19:22.091: INFO: Got endpoints: 
latency-svc-xq9l8 [5.898491195s] Apr 20 16:19:22.116: INFO: Created: latency-svc-bzj9s Apr 20 16:19:22.138: INFO: Got endpoints: latency-svc-bzj9s [5.417537829s] Apr 20 16:19:22.573: INFO: Created: latency-svc-w5gjx Apr 20 16:19:22.708: INFO: Got endpoints: latency-svc-w5gjx [5.953212278s] Apr 20 16:19:22.966: INFO: Created: latency-svc-7tmrl Apr 20 16:19:23.015: INFO: Got endpoints: latency-svc-7tmrl [5.618422761s] Apr 20 16:19:23.016: INFO: Created: latency-svc-hjm6v Apr 20 16:19:23.056: INFO: Got endpoints: latency-svc-hjm6v [5.365902814s] Apr 20 16:19:23.091: INFO: Created: latency-svc-mrmcc Apr 20 16:19:23.114: INFO: Got endpoints: latency-svc-mrmcc [5.190807562s] Apr 20 16:19:23.183: INFO: Created: latency-svc-cgcpc Apr 20 16:19:23.301: INFO: Got endpoints: latency-svc-cgcpc [4.976275886s] Apr 20 16:19:24.049: INFO: Created: latency-svc-47h8n Apr 20 16:19:24.301: INFO: Got endpoints: latency-svc-47h8n [5.947527333s] Apr 20 16:19:24.523: INFO: Created: latency-svc-jdmdc Apr 20 16:19:24.598: INFO: Got endpoints: latency-svc-jdmdc [4.782909195s] Apr 20 16:19:24.598: INFO: Created: latency-svc-7cqwf Apr 20 16:19:24.611: INFO: Got endpoints: latency-svc-7cqwf [4.235240817s] Apr 20 16:19:24.667: INFO: Created: latency-svc-2llwj Apr 20 16:19:24.677: INFO: Got endpoints: latency-svc-2llwj [3.963315566s] Apr 20 16:19:24.720: INFO: Created: latency-svc-qhpn9 Apr 20 16:19:24.731: INFO: Got endpoints: latency-svc-qhpn9 [3.96251389s] Apr 20 16:19:24.834: INFO: Created: latency-svc-pfjbd Apr 20 16:19:24.839: INFO: Got endpoints: latency-svc-pfjbd [3.957639363s] Apr 20 16:19:24.883: INFO: Created: latency-svc-jkkkc Apr 20 16:19:24.906: INFO: Got endpoints: latency-svc-jkkkc [3.060336386s] Apr 20 16:19:24.989: INFO: Created: latency-svc-j8wfc Apr 20 16:19:24.995: INFO: Got endpoints: latency-svc-j8wfc [2.91011714s] Apr 20 16:19:25.035: INFO: Created: latency-svc-zmqn5 Apr 20 16:19:25.049: INFO: Got endpoints: latency-svc-zmqn5 [2.957736614s] Apr 20 16:19:25.140: INFO: 
Created: latency-svc-r2m8c Apr 20 16:19:25.192: INFO: Got endpoints: latency-svc-r2m8c [3.053652171s] Apr 20 16:19:25.835: INFO: Created: latency-svc-sljp8 Apr 20 16:19:26.091: INFO: Got endpoints: latency-svc-sljp8 [3.38282442s] Apr 20 16:19:26.093: INFO: Created: latency-svc-xfsbh Apr 20 16:19:26.109: INFO: Got endpoints: latency-svc-xfsbh [3.094216042s] Apr 20 16:19:26.174: INFO: Created: latency-svc-hg5tw Apr 20 16:19:26.217: INFO: Got endpoints: latency-svc-hg5tw [3.160553264s] Apr 20 16:19:26.245: INFO: Created: latency-svc-hqtmd Apr 20 16:19:26.258: INFO: Got endpoints: latency-svc-hqtmd [3.144071325s] Apr 20 16:19:27.069: INFO: Created: latency-svc-cl7df Apr 20 16:19:27.211: INFO: Got endpoints: latency-svc-cl7df [3.910018746s] Apr 20 16:19:27.228: INFO: Created: latency-svc-ktcjs Apr 20 16:19:27.279: INFO: Got endpoints: latency-svc-ktcjs [2.978066159s] Apr 20 16:19:27.280: INFO: Created: latency-svc-btr75 Apr 20 16:19:27.295: INFO: Got endpoints: latency-svc-btr75 [2.69607506s] Apr 20 16:19:28.060: INFO: Created: latency-svc-xzrnn Apr 20 16:19:28.079: INFO: Got endpoints: latency-svc-xzrnn [3.467542781s] Apr 20 16:19:28.325: INFO: Created: latency-svc-x89j4 Apr 20 16:19:28.364: INFO: Got endpoints: latency-svc-x89j4 [3.686565905s] Apr 20 16:19:28.364: INFO: Created: latency-svc-wbrww Apr 20 16:19:28.388: INFO: Created: latency-svc-jzbpj Apr 20 16:19:28.389: INFO: Got endpoints: latency-svc-wbrww [3.657245026s] Apr 20 16:19:28.484: INFO: Got endpoints: latency-svc-jzbpj [3.644826312s] Apr 20 16:19:28.540: INFO: Created: latency-svc-994pc Apr 20 16:19:28.558: INFO: Got endpoints: latency-svc-994pc [3.652540245s] Apr 20 16:19:28.618: INFO: Created: latency-svc-tmstb Apr 20 16:19:28.693: INFO: Got endpoints: latency-svc-tmstb [3.697930485s] Apr 20 16:19:28.694: INFO: Created: latency-svc-2sbpv Apr 20 16:19:28.779: INFO: Got endpoints: latency-svc-2sbpv [3.730172227s] Apr 20 16:19:28.795: INFO: Created: latency-svc-vvqjn Apr 20 16:19:28.828: INFO: Got 
endpoints: latency-svc-vvqjn [3.635741399s] Apr 20 16:19:28.971: INFO: Created: latency-svc-dgqc5 Apr 20 16:19:29.023: INFO: Got endpoints: latency-svc-dgqc5 [2.932203894s] Apr 20 16:19:29.024: INFO: Created: latency-svc-fddwg Apr 20 16:19:29.061: INFO: Got endpoints: latency-svc-fddwg [2.951812235s] Apr 20 16:19:29.121: INFO: Created: latency-svc-8d69l Apr 20 16:19:29.139: INFO: Got endpoints: latency-svc-8d69l [2.922539266s] Apr 20 16:19:29.197: INFO: Created: latency-svc-24jfm Apr 20 16:19:29.205: INFO: Got endpoints: latency-svc-24jfm [2.947198173s] Apr 20 16:19:29.271: INFO: Created: latency-svc-tjn8z Apr 20 16:19:29.275: INFO: Got endpoints: latency-svc-tjn8z [2.064023836s] Apr 20 16:19:29.325: INFO: Created: latency-svc-jn87m Apr 20 16:19:29.351: INFO: Got endpoints: latency-svc-jn87m [2.071154768s] Apr 20 16:19:29.390: INFO: Created: latency-svc-mpszx Apr 20 16:19:29.397: INFO: Got endpoints: latency-svc-mpszx [2.102168714s] Apr 20 16:19:29.415: INFO: Created: latency-svc-hfxmb Apr 20 16:19:29.421: INFO: Got endpoints: latency-svc-hfxmb [1.341800491s] Apr 20 16:19:29.454: INFO: Created: latency-svc-7zl82 Apr 20 16:19:29.489: INFO: Got endpoints: latency-svc-7zl82 [1.125038476s] Apr 20 16:19:29.535: INFO: Created: latency-svc-p7lc8 Apr 20 16:19:29.564: INFO: Created: latency-svc-76vrp Apr 20 16:19:29.564: INFO: Got endpoints: latency-svc-p7lc8 [1.175496492s] Apr 20 16:19:29.571: INFO: Got endpoints: latency-svc-76vrp [1.086513745s] Apr 20 16:19:29.600: INFO: Created: latency-svc-hclhj Apr 20 16:19:29.618: INFO: Got endpoints: latency-svc-hclhj [1.060052485s] Apr 20 16:19:29.691: INFO: Created: latency-svc-hzxxg Apr 20 16:19:29.696: INFO: Got endpoints: latency-svc-hzxxg [1.002631171s] Apr 20 16:19:29.739: INFO: Created: latency-svc-vd5w2 Apr 20 16:19:29.750: INFO: Got endpoints: latency-svc-vd5w2 [970.749434ms] Apr 20 16:19:29.769: INFO: Created: latency-svc-flmq2 Apr 20 16:19:29.797: INFO: Got endpoints: latency-svc-flmq2 [969.476275ms] Apr 20 16:19:29.830: 
INFO: Created: latency-svc-cg7c5 Apr 20 16:19:29.858: INFO: Got endpoints: latency-svc-cg7c5 [834.638726ms] Apr 20 16:19:29.942: INFO: Created: latency-svc-vw7kx Apr 20 16:19:29.987: INFO: Got endpoints: latency-svc-vw7kx [926.37303ms] Apr 20 16:19:29.988: INFO: Created: latency-svc-cvvz6 Apr 20 16:19:30.002: INFO: Got endpoints: latency-svc-cvvz6 [862.731162ms] Apr 20 16:19:30.073: INFO: Created: latency-svc-bvhcp Apr 20 16:19:30.098: INFO: Got endpoints: latency-svc-bvhcp [892.68082ms] Apr 20 16:19:30.147: INFO: Created: latency-svc-v4j2c Apr 20 16:19:30.211: INFO: Got endpoints: latency-svc-v4j2c [935.751349ms] Apr 20 16:19:30.250: INFO: Created: latency-svc-b6wzd Apr 20 16:19:30.277: INFO: Got endpoints: latency-svc-b6wzd [926.905531ms] Apr 20 16:19:30.298: INFO: Created: latency-svc-vlzfx Apr 20 16:19:30.301: INFO: Got endpoints: latency-svc-vlzfx [904.387498ms] Apr 20 16:19:30.355: INFO: Created: latency-svc-fw9g5 Apr 20 16:19:30.361: INFO: Got endpoints: latency-svc-fw9g5 [940.301267ms] Apr 20 16:19:30.403: INFO: Created: latency-svc-tjs94 Apr 20 16:19:30.415: INFO: Got endpoints: latency-svc-tjs94 [926.254706ms] Apr 20 16:19:30.481: INFO: Created: latency-svc-mfpfn Apr 20 16:19:30.516: INFO: Created: latency-svc-5txjx Apr 20 16:19:30.517: INFO: Got endpoints: latency-svc-mfpfn [952.579631ms] Apr 20 16:19:30.547: INFO: Got endpoints: latency-svc-5txjx [976.413289ms] Apr 20 16:19:30.618: INFO: Created: latency-svc-jxpfl Apr 20 16:19:30.639: INFO: Got endpoints: latency-svc-jxpfl [1.020774672s] Apr 20 16:19:30.640: INFO: Created: latency-svc-w88qz Apr 20 16:19:30.682: INFO: Got endpoints: latency-svc-w88qz [985.74505ms] Apr 20 16:19:30.714: INFO: Created: latency-svc-2z422 Apr 20 16:19:30.738: INFO: Got endpoints: latency-svc-2z422 [987.664715ms] Apr 20 16:19:30.778: INFO: Created: latency-svc-kfksv Apr 20 16:19:30.793: INFO: Got endpoints: latency-svc-kfksv [995.381321ms] Apr 20 16:19:30.821: INFO: Created: latency-svc-wkjhg Apr 20 16:19:30.829: INFO: Got 
endpoints: latency-svc-wkjhg [970.772382ms] Apr 20 16:19:30.852: INFO: Created: latency-svc-jfm5d Apr 20 16:19:30.859: INFO: Got endpoints: latency-svc-jfm5d [871.521505ms] Apr 20 16:19:30.882: INFO: Created: latency-svc-vhxxd Apr 20 16:19:30.894: INFO: Got endpoints: latency-svc-vhxxd [892.291131ms] Apr 20 16:19:31.014: INFO: Created: latency-svc-sqxwx Apr 20 16:19:31.059: INFO: Got endpoints: latency-svc-sqxwx [960.658454ms] Apr 20 16:19:31.061: INFO: Created: latency-svc-2f5p9 Apr 20 16:19:31.080: INFO: Got endpoints: latency-svc-2f5p9 [869.118107ms] Apr 20 16:19:31.175: INFO: Created: latency-svc-xkx99 Apr 20 16:19:31.254: INFO: Created: latency-svc-vd27c Apr 20 16:19:31.254: INFO: Got endpoints: latency-svc-xkx99 [976.227109ms] Apr 20 16:19:31.319: INFO: Got endpoints: latency-svc-vd27c [1.017535973s] Apr 20 16:19:31.325: INFO: Created: latency-svc-pqwb6 Apr 20 16:19:31.338: INFO: Got endpoints: latency-svc-pqwb6 [976.340106ms] Apr 20 16:19:31.897: INFO: Created: latency-svc-nqr6x Apr 20 16:19:31.955: INFO: Got endpoints: latency-svc-nqr6x [1.539433387s] Apr 20 16:19:32.655: INFO: Created: latency-svc-2vtvk Apr 20 16:19:32.786: INFO: Got endpoints: latency-svc-2vtvk [2.269133229s] Apr 20 16:19:33.279: INFO: Created: latency-svc-fhgtf Apr 20 16:19:34.001: INFO: Got endpoints: latency-svc-fhgtf [3.453703835s] Apr 20 16:19:35.185: INFO: Created: latency-svc-lhgr4 Apr 20 16:19:35.236: INFO: Got endpoints: latency-svc-lhgr4 [4.597128702s] Apr 20 16:19:35.966: INFO: Created: latency-svc-mvsnq Apr 20 16:19:36.003: INFO: Got endpoints: latency-svc-mvsnq [5.321115043s] Apr 20 16:19:37.139: INFO: Created: latency-svc-5rf7f Apr 20 16:19:37.230: INFO: Got endpoints: latency-svc-5rf7f [6.492383877s] Apr 20 16:19:37.677: INFO: Created: latency-svc-mrzfs Apr 20 16:19:37.917: INFO: Got endpoints: latency-svc-mrzfs [7.124274118s] Apr 20 16:19:38.186: INFO: Created: latency-svc-5sx8c Apr 20 16:19:41.029: INFO: Got endpoints: latency-svc-5sx8c [10.200025108s] Apr 20 
16:19:41.973: INFO: Created: latency-svc-fqd5h Apr 20 16:19:42.337: INFO: Got endpoints: latency-svc-fqd5h [11.478046622s] Apr 20 16:19:43.104: INFO: Created: latency-svc-xhx2x Apr 20 16:19:43.392: INFO: Got endpoints: latency-svc-xhx2x [12.497779615s] Apr 20 16:19:43.394: INFO: Created: latency-svc-z9p28 Apr 20 16:19:43.565: INFO: Got endpoints: latency-svc-z9p28 [12.505923627s] Apr 20 16:19:43.790: INFO: Created: latency-svc-d6x29 Apr 20 16:19:44.394: INFO: Got endpoints: latency-svc-d6x29 [13.314023304s] Apr 20 16:19:44.805: INFO: Created: latency-svc-vp9vm Apr 20 16:19:44.885: INFO: Got endpoints: latency-svc-vp9vm [13.631096173s] Apr 20 16:19:44.966: INFO: Created: latency-svc-rjwps Apr 20 16:19:44.973: INFO: Got endpoints: latency-svc-rjwps [13.653899871s] Apr 20 16:19:45.889: INFO: Created: latency-svc-fx2zp Apr 20 16:19:46.115: INFO: Got endpoints: latency-svc-fx2zp [14.777522096s] Apr 20 16:19:46.201: INFO: Created: latency-svc-btrrc Apr 20 16:19:46.295: INFO: Got endpoints: latency-svc-btrrc [14.339813337s] Apr 20 16:19:46.297: INFO: Created: latency-svc-ghpdf Apr 20 16:19:46.393: INFO: Got endpoints: latency-svc-ghpdf [13.60661102s] Apr 20 16:19:46.491: INFO: Created: latency-svc-vlfkx Apr 20 16:19:46.525: INFO: Got endpoints: latency-svc-vlfkx [12.524013904s] Apr 20 16:19:46.637: INFO: Created: latency-svc-vprk7 Apr 20 16:19:46.686: INFO: Got endpoints: latency-svc-vprk7 [11.450116934s] Apr 20 16:19:46.721: INFO: Created: latency-svc-fl4mv Apr 20 16:19:46.780: INFO: Got endpoints: latency-svc-fl4mv [10.777291083s] Apr 20 16:19:46.842: INFO: Created: latency-svc-lwrsh Apr 20 16:19:46.877: INFO: Got endpoints: latency-svc-lwrsh [9.647149677s] Apr 20 16:19:47.012: INFO: Created: latency-svc-8qqkr Apr 20 16:19:47.034: INFO: Got endpoints: latency-svc-8qqkr [9.117097878s] Apr 20 16:19:47.164: INFO: Created: latency-svc-g7kc7 Apr 20 16:19:47.396: INFO: Got endpoints: latency-svc-g7kc7 [6.367099578s] Apr 20 16:19:47.991: INFO: Created: latency-svc-9srbp Apr 20 
16:19:48.151: INFO: Got endpoints: latency-svc-9srbp [5.81449505s] Apr 20 16:19:48.838: INFO: Created: latency-svc-7ntcw Apr 20 16:19:48.854: INFO: Got endpoints: latency-svc-7ntcw [5.461770825s] Apr 20 16:19:49.102: INFO: Created: latency-svc-2d4h6 Apr 20 16:19:49.622: INFO: Got endpoints: latency-svc-2d4h6 [6.057277547s] Apr 20 16:19:49.769: INFO: Created: latency-svc-lmg8r Apr 20 16:19:49.801: INFO: Got endpoints: latency-svc-lmg8r [5.406360198s] Apr 20 16:19:50.864: INFO: Created: latency-svc-rmxkk Apr 20 16:19:50.932: INFO: Got endpoints: latency-svc-rmxkk [6.047153436s] Apr 20 16:19:51.056: INFO: Created: latency-svc-p9stc Apr 20 16:19:51.863: INFO: Got endpoints: latency-svc-p9stc [6.890384973s] Apr 20 16:19:51.864: INFO: Created: latency-svc-w9hft Apr 20 16:19:52.087: INFO: Got endpoints: latency-svc-w9hft [5.97218653s] Apr 20 16:19:52.343: INFO: Created: latency-svc-597tv Apr 20 16:19:52.351: INFO: Got endpoints: latency-svc-597tv [6.056208278s] Apr 20 16:19:52.407: INFO: Created: latency-svc-ntcqn Apr 20 16:19:52.616: INFO: Got endpoints: latency-svc-ntcqn [6.223874127s] Apr 20 16:19:52.666: INFO: Created: latency-svc-4dpkp Apr 20 16:19:52.687: INFO: Got endpoints: latency-svc-4dpkp [6.161505097s] Apr 20 16:19:53.348: INFO: Created: latency-svc-trqdj Apr 20 16:19:53.357: INFO: Got endpoints: latency-svc-trqdj [6.670667445s] Apr 20 16:19:53.547: INFO: Created: latency-svc-lbwfg Apr 20 16:19:54.033: INFO: Got endpoints: latency-svc-lbwfg [7.252907023s] Apr 20 16:19:54.034: INFO: Created: latency-svc-8ph9w Apr 20 16:19:54.070: INFO: Got endpoints: latency-svc-8ph9w [7.192592425s] Apr 20 16:19:54.303: INFO: Created: latency-svc-ht92c Apr 20 16:19:54.357: INFO: Got endpoints: latency-svc-ht92c [7.32306128s] Apr 20 16:19:54.729: INFO: Created: latency-svc-ftr9q Apr 20 16:19:54.894: INFO: Got endpoints: latency-svc-ftr9q [7.498454148s] Apr 20 16:19:55.657: INFO: Created: latency-svc-hl865 Apr 20 16:19:55.787: INFO: Got endpoints: latency-svc-hl865 [7.635599227s] 
Apr 20 16:19:55.997: INFO: Created: latency-svc-tknjw Apr 20 16:19:56.175: INFO: Got endpoints: latency-svc-tknjw [7.321493879s] Apr 20 16:19:56.176: INFO: Created: latency-svc-x5kgv Apr 20 16:19:56.214: INFO: Got endpoints: latency-svc-x5kgv [6.59193602s] Apr 20 16:19:56.742: INFO: Created: latency-svc-p68tj Apr 20 16:19:56.783: INFO: Got endpoints: latency-svc-p68tj [6.982176136s] Apr 20 16:19:57.128: INFO: Created: latency-svc-m8g64 Apr 20 16:19:57.441: INFO: Got endpoints: latency-svc-m8g64 [6.509007731s] Apr 20 16:19:57.441: INFO: Created: latency-svc-x46t8 Apr 20 16:19:57.619: INFO: Got endpoints: latency-svc-x46t8 [5.755621863s] Apr 20 16:19:57.899: INFO: Created: latency-svc-6qmbd Apr 20 16:19:57.915: INFO: Got endpoints: latency-svc-6qmbd [5.827453764s] Apr 20 16:19:58.146: INFO: Created: latency-svc-krkfv Apr 20 16:19:58.184: INFO: Got endpoints: latency-svc-krkfv [5.83262488s] Apr 20 16:19:58.812: INFO: Created: latency-svc-q59vw Apr 20 16:19:58.843: INFO: Got endpoints: latency-svc-q59vw [6.226144295s] Apr 20 16:19:59.203: INFO: Created: latency-svc-8pxzk Apr 20 16:19:59.215: INFO: Got endpoints: latency-svc-8pxzk [6.527992125s] Apr 20 16:19:59.277: INFO: Created: latency-svc-c88wb Apr 20 16:19:59.487: INFO: Got endpoints: latency-svc-c88wb [6.130097341s] Apr 20 16:19:59.501: INFO: Created: latency-svc-jk529 Apr 20 16:19:59.520: INFO: Got endpoints: latency-svc-jk529 [5.487325107s] Apr 20 16:19:59.520: INFO: Latencies: [166.690665ms 208.817041ms 343.247846ms 381.051856ms 521.597155ms 577.865849ms 622.950407ms 643.217748ms 674.179614ms 773.109056ms 781.633752ms 799.962925ms 808.580172ms 808.904448ms 834.638726ms 835.369395ms 853.8301ms 862.731162ms 862.810828ms 865.819761ms 869.118107ms 871.042159ms 871.521505ms 892.291131ms 892.68082ms 904.387498ms 926.254706ms 926.37303ms 926.905531ms 931.519096ms 935.751349ms 940.301267ms 952.579631ms 956.32336ms 960.658454ms 969.476275ms 970.749434ms 970.772382ms 976.227109ms 976.340106ms 976.413289ms 985.400042ms 
985.74505ms 987.664715ms 992.882465ms 995.381321ms 1.002631171s 1.017535973s 1.020774672s 1.024321757s 1.060052485s 1.086513745s 1.092960852s 1.125038476s 1.175496492s 1.341800491s 1.539433387s 1.572407708s 2.064023836s 2.071154768s 2.102168714s 2.269133229s 2.69607506s 2.75123934s 2.91011714s 2.922539266s 2.932203894s 2.947198173s 2.951812235s 2.957736614s 2.978066159s 3.053652171s 3.060336386s 3.094216042s 3.144071325s 3.160553264s 3.38282442s 3.453703835s 3.467542781s 3.635741399s 3.644826312s 3.652540245s 3.657245026s 3.686565905s 3.697930485s 3.730172227s 3.910018746s 3.957639363s 3.96251389s 3.963315566s 4.235240817s 4.404814894s 4.597128702s 4.772530346s 4.782909195s 4.928010367s 4.942656124s 4.976275886s 5.05345128s 5.068084061s 5.10137715s 5.110572815s 5.190807562s 5.232832912s 5.300256835s 5.321115043s 5.365902814s 5.406360198s 5.417537829s 5.461770825s 5.487325107s 5.592858324s 5.618422761s 5.646497623s 5.659972167s 5.695056404s 5.71885038s 5.755621863s 5.81449505s 5.827453764s 5.83262488s 5.898491195s 5.947527333s 5.953212278s 5.97218653s 6.047153436s 6.056208278s 6.057277547s 6.092146435s 6.096358106s 6.130097341s 6.151421501s 6.161505097s 6.161537801s 6.223874127s 6.226144295s 6.349873405s 6.367099578s 6.385120572s 6.472471193s 6.492383877s 6.509007731s 6.522372979s 6.527992125s 6.59193602s 6.670667445s 6.694248671s 6.73048993s 6.890384973s 6.912012162s 6.982176136s 7.124274118s 7.192592425s 7.252907023s 7.321493879s 7.32306128s 7.498454148s 7.635599227s 7.908956454s 8.146811026s 8.391633428s 8.393945162s 9.055008689s 9.117097878s 9.418966493s 9.518926398s 9.647149677s 9.783073593s 10.200025108s 10.359685354s 10.777291083s 11.101587194s 11.186144667s 11.450116934s 11.478046622s 12.497779615s 12.505923627s 12.524013904s 13.314023304s 13.585653019s 13.60661102s 13.631096173s 13.653899871s 14.046155268s 14.339813337s 14.777522096s 16.357083584s 17.146570604s 17.651849294s 18.959503682s 20.203942876s 20.583919489s 20.842981753s 21.335507018s 21.426491749s 
21.876010061s 21.879284262s 22.117541289s 22.476278529s 22.829479819s]
Apr 20 16:19:59.521: INFO: 50 %ile: 5.10137715s
Apr 20 16:19:59.521: INFO: 90 %ile: 13.60661102s
Apr 20 16:19:59.521: INFO: 99 %ile: 22.476278529s
Apr 20 16:19:59.521: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:19:59.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3845" for this suite.
Apr 20 16:22:01.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:22:02.064: INFO: namespace svc-latency-3845 deletion completed in 2m2.507762273s
• [SLOW TEST:210.920 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:22:02.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:22:02.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a" in namespace "projected-9221" to be "success or failure"
Apr 20 16:22:02.181: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.285358ms
Apr 20 16:22:04.184: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022872401s
Apr 20 16:22:06.188: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026777626s
Apr 20 16:22:08.226: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06403272s
Apr 20 16:22:10.573: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4112793s
Apr 20 16:22:12.713: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.551826006s
Apr 20 16:22:14.717: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.555143543s
STEP: Saw pod success
Apr 20 16:22:14.717: INFO: Pod "downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a" satisfied condition "success or failure"
Apr 20 16:22:14.719: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a container client-container:
STEP: delete the pod
Apr 20 16:22:15.457: INFO: Waiting for pod downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a to disappear
Apr 20 16:22:15.503: INFO: Pod downwardapi-volume-a7ecd918-303f-47b9-a092-913b88236f7a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:22:15.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9221" for this suite.
Apr 20 16:22:21.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:22:21.695: INFO: namespace projected-9221 deletion completed in 6.186957614s
• [SLOW TEST:19.631 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:22:21.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:22:29.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3761" for this suite.
Apr 20 16:22:35.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:22:35.877: INFO: namespace kubelet-test-3761 deletion completed in 6.077987498s
• [SLOW TEST:14.182 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:22:35.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:22:35.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:22:50.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5091" for this suite.
Apr 20 16:23:44.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:23:44.124: INFO: namespace pods-5091 deletion completed in 54.07871396s
• [SLOW TEST:68.247 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:23:44.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 20 16:23:52.783: INFO: Successfully updated pod "pod-update-d3b787df-cae9-4674-b49f-a7144917173d"
STEP: verifying the updated pod is in kubernetes
Apr 20 16:23:52.872: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:23:52.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2240" for this suite.
Apr 20 16:24:14.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:24:14.986: INFO: namespace pods-2240 deletion completed in 22.111920846s
• [SLOW TEST:30.862 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:24:14.987: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a62efce7-94b7-490e-8c3b-8b7488e0e2d5 STEP: Creating a pod to test consume configMaps Apr 20 16:24:15.100: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f" in namespace "projected-7400" to be "success or failure" Apr 20 16:24:15.109: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.852831ms Apr 20 16:24:17.112: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012157324s Apr 20 16:24:19.115: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015190671s Apr 20 16:24:21.515: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415216862s Apr 20 16:24:23.518: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417910755s Apr 20 16:24:25.521: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.420484577s STEP: Saw pod success Apr 20 16:24:25.521: INFO: Pod "pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f" satisfied condition "success or failure" Apr 20 16:24:25.523: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f container projected-configmap-volume-test: STEP: delete the pod Apr 20 16:24:25.555: INFO: Waiting for pod pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f to disappear Apr 20 16:24:25.571: INFO: Pod pod-projected-configmaps-d62cc64f-7ea2-40c2-bc35-921c3bd5111f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:24:25.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7400" for this suite. Apr 20 16:24:31.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:24:31.670: INFO: namespace projected-7400 deletion completed in 6.096206017s • [SLOW TEST:16.683 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:24:31.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a in namespace container-probe-1371 Apr 20 16:24:39.220: INFO: Started pod liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a in namespace container-probe-1371 STEP: checking the pod's current state and verifying that restartCount is present Apr 20 16:24:39.222: INFO: Initial restart count of pod liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is 0 Apr 20 16:25:05.362: INFO: Restart count of pod container-probe-1371/liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is now 1 (26.140152736s elapsed) Apr 20 16:25:21.654: INFO: Restart count of pod container-probe-1371/liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is now 2 (42.431854363s elapsed) Apr 20 16:25:39.816: INFO: Restart count of pod container-probe-1371/liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is now 3 (1m0.59332181s elapsed) Apr 20 16:26:04.894: INFO: Restart count of pod container-probe-1371/liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is now 4 (1m25.671613463s elapsed) Apr 20 16:27:01.481: INFO: Restart count of pod container-probe-1371/liveness-dde5c13c-96d3-4e14-a0eb-f15d1d299f9a is now 5 (2m22.258860526s elapsed) STEP: deleting the pod 
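The assertion behind this probe test is simple to state: each restartCount observed for the pod must be at least as large as the previous one. A minimal local sketch of that check (not the e2e framework's actual helper; the function name is illustrative):

```python
def restart_counts_monotonic(counts):
    """Return True if the observed restartCount sequence never decreases."""
    return all(a <= b for a, b in zip(counts, counts[1:]))

# Counts as logged above: 0 initially, then 1..5 as the liveness probe keeps failing.
observed = [0, 1, 2, 3, 4, 5]
assert restart_counts_monotonic(observed)
assert not restart_counts_monotonic([0, 2, 1])  # a decrease would fail the test
```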
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:27:01.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1371" for this suite.
Apr 20 16:27:07.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:27:07.643: INFO: namespace container-probe-1371 deletion completed in 6.118669718s
• [SLOW TEST:155.973 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:27:07.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 20 16:27:07.829: INFO: Waiting up to 5m0s for pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70" in namespace "containers-7749" to be "success or failure"
Apr 20 16:27:07.870: INFO: Pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70": Phase="Pending", Reason="", readiness=false. Elapsed: 41.152516ms
Apr 20 16:27:09.873: INFO: Pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044376813s
Apr 20 16:27:11.991: INFO: Pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70": Phase="Running", Reason="", readiness=true. Elapsed: 4.161702091s
Apr 20 16:27:13.994: INFO: Pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165312312s
STEP: Saw pod success
Apr 20 16:27:13.994: INFO: Pod "client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70" satisfied condition "success or failure"
Apr 20 16:27:13.996: INFO: Trying to get logs from node iruya-worker2 pod client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70 container test-container:
STEP: delete the pod
Apr 20 16:27:14.023: INFO: Waiting for pod client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70 to disappear
Apr 20 16:27:14.038: INFO: Pod client-containers-24ff7cfe-97d5-4b5d-8fb9-8a75bfd86b70 no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:27:14.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7749" for this suite.
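What this Docker Containers test exercises: when a pod spec leaves `command` and `args` unset, the container runs the image's baked-in ENTRYPOINT and CMD. A hedged sketch of such a spec as a plain dict (field names follow the Pod API; the pod name and image here are illustrative, not the ones the test actually uses):

```python
# Pod manifest with no command/args: the container falls back to the image
# defaults (ENTRYPOINT/CMD), which is exactly what the test verifies.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "test-container",
                "image": "example.invalid/entrypoint-tester:1.0",  # assumed image
                # no "command" and no "args": image defaults apply
            }
        ],
    },
}

container = pod["spec"]["containers"][0]
assert "command" not in container and "args" not in container
```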
Apr 20 16:27:20.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:27:20.468: INFO: namespace containers-7749 deletion completed in 6.427349947s
• [SLOW TEST:12.825 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:27:20.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-17cb429a-6034-4399-8bbb-3f100615a1ff
STEP: Creating a pod to test consume secrets
Apr 20 16:27:20.550: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0" in namespace "projected-3582" to be "success or failure"
Apr 20 16:27:20.565: INFO: Pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.944281ms
Apr 20 16:27:22.632: INFO: Pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081862052s
Apr 20 16:27:24.636: INFO: Pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085622664s
Apr 20 16:27:26.639: INFO: Pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089100097s
STEP: Saw pod success
Apr 20 16:27:26.639: INFO: Pod "pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0" satisfied condition "success or failure"
Apr 20 16:27:26.642: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0 container projected-secret-volume-test:
STEP: delete the pod
Apr 20 16:27:26.658: INFO: Waiting for pod pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0 to disappear
Apr 20 16:27:26.663: INFO: Pod pod-projected-secrets-cc8198d5-570f-4757-9c22-90e993a440c0 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:27:26.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3582" for this suite.
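The "Item Mode set" variant above mounts one secret key at a mapped path with an explicit file mode. A sketch of the corresponding projected-volume stanza as a plain dict (the secret name, key, and path here are illustrative; the Kubernetes API expects `mode` as an integer, i.e. the decimal form of octal 0400):

```python
SECRET_NAME = "projected-secret-test-map-example"  # illustrative

# Projected volume mapping one secret key to a new path with mode 0400.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [
            {
                "secret": {
                    "name": SECRET_NAME,
                    "items": [
                        {
                            "key": "data-1",            # assumed key name
                            "path": "new-path-data-1",  # assumed mount path
                            "mode": 0o400,              # read-only for owner
                        }
                    ],
                }
            }
        ]
    },
}

item = volume["projected"]["sources"][0]["secret"]["items"][0]
assert item["mode"] == 256  # 0o400 in decimal
```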
Apr 20 16:27:32.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:27:32.759: INFO: namespace projected-3582 deletion completed in 6.093039554s
• [SLOW TEST:12.290 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:27:32.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 20 16:27:36.847: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-dc7dd2d0-97d9-45d3-b0e6-d534aed77acd,GenerateName:,Namespace:events-3211,SelfLink:/api/v1/namespaces/events-3211/pods/send-events-dc7dd2d0-97d9-45d3-b0e6-d534aed77acd,UID:c333bfcf-6f33-44f5-89b0-a9ee62de2ac3,ResourceVersion:1296501,Generation:0,CreationTimestamp:2021-04-20 16:27:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 826309374,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pc2gj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pc2gj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pc2gj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00258bb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00258bb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:27:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:27:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:27:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 16:27:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.208,StartTime:2021-04-20 16:27:32 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-04-20 16:27:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://b5b80568f69182a72cd0309fe0e72962a713a130caf9ef81d6c462491f90084b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Apr 20 16:27:38.853: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 20 16:27:40.858: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:27:40.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3211" for this suite.
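The check in this Events test looks for two events about the pod: one emitted by the scheduler and one by the kubelet. A minimal local sketch of that filter over event records (plain dicts standing in for v1.Event objects; the field shape mirrors `event.source.component`, and the helper name is illustrative):

```python
def saw_event_from(events, component):
    """True if any event was reported by the given source component."""
    return any(e.get("source", {}).get("component") == component for e in events)

# Stand-in events like those the test retrieves for its pod.
events = [
    {"reason": "Scheduled", "source": {"component": "default-scheduler"}},
    {"reason": "Pulled",    "source": {"component": "kubelet"}},
    {"reason": "Started",   "source": {"component": "kubelet"}},
]
assert saw_event_from(events, "default-scheduler")  # scheduler event seen
assert saw_event_from(events, "kubelet")            # kubelet event seen
```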
Apr 20 16:28:18.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:28:19.021: INFO: namespace events-3211 deletion completed in 38.135154745s
• [SLOW TEST:46.262 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:28:19.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 20 16:28:19.122: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296605,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 20 16:28:19.122: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296606,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 20 16:28:19.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296607,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 20 16:28:29.175: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296628,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 20 16:28:29.176: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296629,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 20 16:28:29.176: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1471,SelfLink:/api/v1/namespaces/watch-1471/configmaps/e2e-watch-test-label-changed,UID:1fcb2e04-938d-4c73-bbd0-5be6fc6e0516,ResourceVersion:1296630,Generation:0,CreationTimestamp:2021-04-20 16:28:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:28:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1471" for this suite.
Apr 20 16:28:35.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:28:35.619: INFO: namespace watch-1471 deletion completed in 6.27183277s
• [SLOW TEST:16.598 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:28:35.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-4dda13ec-ef18-43c0-af98-e1f23b7a80a5
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:28:35.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4310" for this suite.
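The failure this ConfigMap test expects is API-server validation: ConfigMap keys must be non-empty and are limited to a restricted character set. A rough local re-statement of that rule (a simplification of the real validation in the apiserver, shown only to illustrate why the empty key is rejected; the regex is an approximation):

```python
import re

# Simplified version of the ConfigMap key rule: non-empty, and limited to
# alphanumerics, '-', '_' and '.' (the authoritative check lives server-side).
KEY_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def valid_configmap_key(key):
    return bool(key) and KEY_RE.match(key) is not None

assert not valid_configmap_key("")         # the empty key the test submits
assert valid_configmap_key("config.yaml")  # a normal key is accepted
```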
Apr 20 16:28:41.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:28:41.781: INFO: namespace configmap-4310 deletion completed in 6.10586106s
• [SLOW TEST:6.162 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:28:41.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 20 16:28:48.430: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7450 pod-service-account-9e53e5ec-bae2-47ee-bcd0-569636629654 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 20 16:28:51.455: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7450 pod-service-account-9e53e5ec-bae2-47ee-bcd0-569636629654 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 20 16:28:51.679: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7450 pod-service-account-9e53e5ec-bae2-47ee-bcd0-569636629654 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:28:51.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7450" for this suite.
Apr 20 16:28:57.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:28:58.026: INFO: namespace svcaccounts-7450 deletion completed in 6.109067363s
• [SLOW TEST:16.244 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:28:58.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0a199538-f203-4c6c-ab05-e819564f2e3a
STEP: Creating a pod to test consume configMaps
Apr 20 16:28:58.110: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090" in namespace "projected-635" to be "success or failure"
Apr 20 16:28:58.159: INFO: Pod "pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090": Phase="Pending", Reason="", readiness=false. Elapsed: 48.420233ms
Apr 20 16:29:00.162: INFO: Pod "pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051443961s
Apr 20 16:29:02.170: INFO: Pod "pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05945108s
STEP: Saw pod success
Apr 20 16:29:02.170: INFO: Pod "pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090" satisfied condition "success or failure"
Apr 20 16:29:02.173: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090 container projected-configmap-volume-test:
STEP: delete the pod
Apr 20 16:29:02.264: INFO: Waiting for pod pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090 to disappear
Apr 20 16:29:02.382: INFO: Pod pod-projected-configmaps-cfd7b80d-97e3-4ee7-83a1-b668e5077090 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:29:02.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-635" for this suite.
Apr 20 16:29:08.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:29:08.555: INFO: namespace projected-635 deletion completed in 6.169104972s
• [SLOW TEST:10.529 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:29:08.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 20 16:29:08.683: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:08.695: INFO: Number of nodes with available pods: 0
Apr 20 16:29:08.695: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:29:09.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:09.704: INFO: Number of nodes with available pods: 0
Apr 20 16:29:09.704: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:29:10.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:10.861: INFO: Number of nodes with available pods: 0
Apr 20 16:29:10.861: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:29:11.874: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:11.876: INFO: Number of nodes with available pods: 0
Apr 20 16:29:11.876: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 16:29:12.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:12.704: INFO: Number of nodes with available pods: 1
Apr 20 16:29:12.704: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 20 16:29:13.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:13.703: INFO: Number of nodes with available pods: 2
Apr 20 16:29:13.703: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 20 16:29:13.725: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:13.745: INFO: Number of nodes with available pods: 1
Apr 20 16:29:13.745: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 20 16:29:14.750: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:14.754: INFO: Number of nodes with available pods: 1
Apr 20 16:29:14.754: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 20 16:29:15.751: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:15.754: INFO: Number of nodes with available pods: 1
Apr 20 16:29:15.754: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 20 16:29:16.750: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:16.754: INFO: Number of nodes with available pods: 1
Apr 20 16:29:16.754: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 20 16:29:17.751: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 16:29:17.754: INFO: Number of nodes with available pods: 2
Apr 20 16:29:17.754: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9133, will wait for the garbage collector to delete the pods
Apr 20 16:29:17.819: INFO: Deleting DaemonSet.extensions daemon-set took: 6.305992ms
Apr 20 16:29:18.120: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.408415ms
Apr 20 16:29:29.244: INFO: Number of nodes with available pods: 0
Apr 20 16:29:29.244: INFO: Number of running nodes: 0, number of available pods: 0
Apr 20 16:29:29.246: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9133/daemonsets","resourceVersion":"1296868"},"items":null}
Apr 20 16:29:29.249: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9133/pods","resourceVersion":"1296868"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:29:29.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9133" for this suite.
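The repeated "Number of running nodes / number of available pods" lines above come from a poll loop that counts the nodes carrying an available daemon pod and compares that to the expected node count. A rough, cluster-independent sketch of that summary (the function name and data shape are ours, not the e2e framework's):

```python
def daemon_status(expected_nodes, pods):
    """Summarize DaemonSet progress the way the e2e poll loop does:
    (nodes running an available daemon pod, total available pods).
    Illustrative sketch only."""
    nodes_with_available = {p["node"] for p in pods if p["available"]}
    running_nodes = len(nodes_with_available & set(expected_nodes))
    available_pods = sum(1 for p in pods if p["available"])
    return running_nodes, available_pods

pods = [
    {"node": "iruya-worker", "available": True},
    {"node": "iruya-worker2", "available": True},
]
# Matches the final poll above: 2 running nodes, 2 available pods.
print(daemon_status(["iruya-worker", "iruya-worker2"], pods))  # (2, 2)
```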
Apr 20 16:29:35.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:29:35.360: INFO: namespace daemonsets-9133 deletion completed in 6.101176162s
• [SLOW TEST:26.805 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:29:35.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:29:36.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd" in namespace "downward-api-5829" to be "success or failure"
Apr 20 16:29:36.061: INFO: Pod "downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd": Phase="Pending", Reason="", readiness=false. Elapsed: 57.78005ms
Apr 20 16:29:38.064: INFO: Pod "downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061643137s
Apr 20 16:29:40.069: INFO: Pod "downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065856039s
STEP: Saw pod success
Apr 20 16:29:40.069: INFO: Pod "downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd" satisfied condition "success or failure"
Apr 20 16:29:40.071: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd container client-container:
STEP: delete the pod
Apr 20 16:29:40.117: INFO: Waiting for pod downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd to disappear
Apr 20 16:29:40.127: INFO: Pod downwardapi-volume-def021e2-bede-4bba-b46e-f277604e75cd no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:29:40.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5829" for this suite.
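The memory-limit test above checks the contents of a downwardAPI volume file populated from a `resourceFieldRef`: the file holds the resource value divided by the requested divisor (default 1), rounded up. A rough sketch of that formatting rule (the helper name is ours; real Kubernetes does this with `resource.Quantity` arithmetic):

```python
def memory_limit_file_contents(limit_bytes: int, divisor: int = 1) -> str:
    """Illustrative: format a memory limit the way a downwardAPI
    volume file presents it, i.e. value / divisor, rounded up."""
    return str(-(-limit_bytes // divisor))  # ceiling division

print(memory_limit_file_contents(64 * 1024 * 1024))           # "67108864" (divisor 1)
print(memory_limit_file_contents(64 * 1024 * 1024, 1 << 20))  # "64" (divisor 1Mi)
```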
Apr 20 16:29:46.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:29:46.206: INFO: namespace downward-api-5829 deletion completed in 6.075397769s
• [SLOW TEST:10.845 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:29:46.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 20 16:29:46.260: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8948,SelfLink:/api/v1/namespaces/watch-8948/configmaps/e2e-watch-test-watch-closed,UID:7560294c-7814-4a73-ba9e-6f7647517ecd,ResourceVersion:1296954,Generation:0,CreationTimestamp:2021-04-20 16:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 20 16:29:46.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8948,SelfLink:/api/v1/namespaces/watch-8948/configmaps/e2e-watch-test-watch-closed,UID:7560294c-7814-4a73-ba9e-6f7647517ecd,ResourceVersion:1296955,Generation:0,CreationTimestamp:2021-04-20 16:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 20 16:29:46.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8948,SelfLink:/api/v1/namespaces/watch-8948/configmaps/e2e-watch-test-watch-closed,UID:7560294c-7814-4a73-ba9e-6f7647517ecd,ResourceVersion:1296956,Generation:0,CreationTimestamp:2021-04-20 16:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 20 16:29:46.277: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8948,SelfLink:/api/v1/namespaces/watch-8948/configmaps/e2e-watch-test-watch-closed,UID:7560294c-7814-4a73-ba9e-6f7647517ecd,ResourceVersion:1296957,Generation:0,CreationTimestamp:2021-04-20 16:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:29:46.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8948" for this suite.
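The watch test above closes its first watch after observing resourceVersion 1296955, then opens a new watch from that version and expects only the later MODIFIED and DELETED events. A cluster-independent sketch of that resume behavior (real resourceVersions are opaque tokens and must not be compared numerically in general; the integer comparison here is a simplification for illustration):

```python
def events_after(events, last_rv):
    """Replay events newer than last_rv, approximating how the test
    resumes its watch from the last resourceVersion it observed."""
    return [e for e in events if int(e["resourceVersion"]) > int(last_rv)]

stream = [
    {"type": "ADDED",    "resourceVersion": "1296954"},
    {"type": "MODIFIED", "resourceVersion": "1296955"},  # first watch closed here
    {"type": "MODIFIED", "resourceVersion": "1296956"},
    {"type": "DELETED",  "resourceVersion": "1296957"},
]
print([e["type"] for e in events_after(stream, "1296955")])  # ['MODIFIED', 'DELETED']
```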
Apr 20 16:29:52.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:29:52.415: INFO: namespace watch-8948 deletion completed in 6.104346146s
• [SLOW TEST:6.209 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:29:52.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:29:52.465: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 20 16:29:53.567: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:29:54.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3371" for this suite.
Apr 20 16:30:00.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:30:01.048: INFO: namespace replication-controller-3371 deletion completed in 6.378904771s
• [SLOW TEST:8.633 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:30:01.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 20 16:30:05.972: INFO: Successfully updated pod "annotationupdate91d695e8-6bb5-4dd6-9916-59ceb7d97a22"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:30:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5343" for this suite.
Apr 20 16:30:30.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:30:30.587: INFO: namespace downward-api-5343 deletion completed in 22.593022864s
• [SLOW TEST:29.539 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:30:30.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 20 16:30:31.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7328'
Apr 20 16:30:31.743: INFO: stderr: ""
Apr 20 16:30:31.743: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Apr 20 16:30:41.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7328 -o json'
Apr 20 16:30:41.890: INFO: stderr: ""
Apr 20 16:30:41.890: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-04-20T16:30:31Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7328\",\n \"resourceVersion\": \"1297163\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7328/pods/e2e-test-nginx-pod\",\n \"uid\": \"db7ab3a3-2bbc-49d9-a757-a02244476307\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-mbt47\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-mbt47\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-mbt47\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-04-20T16:30:31Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-04-20T16:30:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-04-20T16:30:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-04-20T16:30:31Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://bcf23bf7cfe362c6392e720db737b44148c3bc3ab2973efc37b4bfbb559ff4ab\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-04-20T16:30:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.25\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-04-20T16:30:31Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 20 16:30:41.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7328'
Apr 20 16:30:42.162: INFO: stderr: ""
Apr 20 16:30:42.162: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Apr 20 16:30:42.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7328'
Apr 20 16:30:59.238: INFO: stderr: ""
Apr 20 16:30:59.238: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:30:59.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7328" for this suite.
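The replace step above takes the pod's JSON from `kubectl get -o json`, swaps the container image, and pipes the result back through `kubectl replace -f -`. The manifest surgery itself amounts to this (a trimmed sketch with a hypothetical helper; the real test edits the full pod JSON shown in the log):

```python
import json

# Trimmed stand-in for the pod JSON printed by `kubectl get -o json` above.
pod_json = json.dumps({
    "spec": {"containers": [
        {"name": "e2e-test-nginx-pod", "image": "docker.io/library/nginx:1.14-alpine"}
    ]}
})

def replace_image(manifest: str, new_image: str) -> str:
    """Illustrative: swap the first container's image before the
    manifest is fed back to `kubectl replace -f -`."""
    obj = json.loads(manifest)
    obj["spec"]["containers"][0]["image"] = new_image
    return json.dumps(obj)

updated = replace_image(pod_json, "docker.io/library/busybox:1.29")
print(json.loads(updated)["spec"]["containers"][0]["image"])  # docker.io/library/busybox:1.29
```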
Apr 20 16:31:05.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:31:05.380: INFO: namespace kubectl-7328 deletion completed in 6.12673282s
• [SLOW TEST:34.792 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:31:05.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e4d3a451-12d1-43a0-ad6f-7b112b076075
STEP: Creating a pod to test consume secrets
Apr 20 16:31:05.442: INFO: Waiting up to 5m0s for pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f" in namespace "secrets-7613" to be "success or failure"
Apr 20 16:31:05.444: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.845849ms
Apr 20 16:31:07.449: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006575601s
Apr 20 16:31:09.453: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010359678s
Apr 20 16:31:11.746: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303843439s
Apr 20 16:31:13.750: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30769133s
Apr 20 16:31:15.753: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310654584s
Apr 20 16:31:17.757: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.314534734s
STEP: Saw pod success
Apr 20 16:31:17.757: INFO: Pod "pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f" satisfied condition "success or failure"
Apr 20 16:31:17.760: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f container secret-volume-test:
STEP: delete the pod
Apr 20 16:31:17.775: INFO: Waiting for pod pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f to disappear
Apr 20 16:31:17.792: INFO: Pod pod-secrets-8db74b4a-3e83-4cb0-b09b-aa088e01856f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:31:17.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7613" for this suite.
Apr 20 16:31:23.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:31:23.908: INFO: namespace secrets-7613 deletion completed in 6.112871131s
• [SLOW TEST:18.527 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:31:23.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr 20 16:31:23.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8659'
Apr 20 16:31:24.209: INFO: stderr: ""
Apr 20 16:31:24.209: INFO: stdout: "pod/pause created\n"
Apr 20 16:31:24.209: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 20 16:31:24.209: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8659" to be "running and ready"
Apr 20 16:31:24.212: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829795ms
Apr 20 16:31:26.215: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005841957s
Apr 20 16:31:28.219: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009797443s
Apr 20 16:31:30.223: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013511763s
Apr 20 16:31:32.226: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016763953s
Apr 20 16:31:34.230: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.020410196s
Apr 20 16:31:34.230: INFO: Pod "pause" satisfied condition "running and ready"
Apr 20 16:31:34.230: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 20 16:31:34.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8659'
Apr 20 16:31:34.337: INFO: stderr: ""
Apr 20 16:31:34.337: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 20 16:31:34.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8659'
Apr 20 16:31:34.432: INFO: stderr: ""
Apr 20 16:31:34.432: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 20 16:31:34.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8659'
Apr 20 16:31:34.555: INFO: stderr: ""
Apr 20 16:31:34.555: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 20 16:31:34.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8659'
Apr 20 16:31:34.643: INFO: stderr: ""
Apr 20 16:31:34.643: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Apr 20 16:31:34.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8659'
Apr 20 16:31:34.754: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 16:31:34.754: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 20 16:31:34.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8659'
Apr 20 16:31:34.845: INFO: stderr: "No resources found.\n"
Apr 20 16:31:34.845: INFO: stdout: ""
Apr 20 16:31:34.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8659 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 16:31:34.929: INFO: stderr: ""
Apr 20 16:31:34.929: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:31:34.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8659" for this suite.
Apr 20 16:31:41.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:31:41.468: INFO: namespace kubectl-8659 deletion completed in 6.535555164s
• [SLOW TEST:17.560 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:31:41.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-99009e93-9902-4eb1-82c9-556d665f7137 in namespace container-probe-2662
Apr 20 16:31:49.557: INFO: Started pod test-webserver-99009e93-9902-4eb1-82c9-556d665f7137 in namespace container-probe-2662
STEP: checking the pod's current state and verifying that restartCount is present
Apr 20 16:31:49.559: INFO: Initial restart count of pod test-webserver-99009e93-9902-4eb1-82c9-556d665f7137 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:35:52.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2662" for this suite.
Apr 20 16:35:59.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:35:59.616: INFO: namespace container-probe-2662 deletion completed in 6.849083464s
• [SLOW TEST:258.148 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:35:59.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 20 16:36:05.883: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 20 16:36:21.122: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:36:21.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6319" for this suite.
Apr 20 16:36:28.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:36:28.911: INFO: namespace pods-6319 deletion completed in 7.423215626s
• [SLOW TEST:29.295 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:36:28.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:36:29.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481" in namespace "downward-api-1228" to be "success or failure"
Apr 20 16:36:29.927: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Pending", Reason="", readiness=false. Elapsed: 72.531991ms
Apr 20 16:36:31.957: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102257338s
Apr 20 16:36:34.149: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294272775s
Apr 20 16:36:36.152: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Running", Reason="", readiness=true. Elapsed: 6.297003768s
Apr 20 16:36:38.155: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Running", Reason="", readiness=true. Elapsed: 8.300390037s
Apr 20 16:36:40.383: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.527772236s
STEP: Saw pod success
Apr 20 16:36:40.383: INFO: Pod "downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481" satisfied condition "success or failure"
Apr 20 16:36:40.384: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481 container client-container:
STEP: delete the pod
Apr 20 16:36:40.450: INFO: Waiting for pod downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481 to disappear
Apr 20 16:36:40.463: INFO: Pod downwardapi-volume-e3fbff1c-7c70-4b19-96af-7ddaed14c481 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:36:40.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1228" for this suite.
Apr 20 16:36:46.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:36:46.968: INFO: namespace downward-api-1228 deletion completed in 6.501836598s
• [SLOW TEST:18.056 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:36:46.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 20 16:36:47.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 20 16:36:47.151: INFO: stderr: ""
Apr 20 16:36:47.151: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40269\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40269/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:36:47.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2753" for this suite.
Apr 20 16:36:53.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:36:53.291: INFO: namespace kubectl-2753 deletion completed in 6.093237043s
• [SLOW TEST:6.323 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:36:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mmlq
STEP: Creating a pod to test atomic-volume-subpath
Apr 20 16:36:53.456: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mmlq" in namespace "subpath-3893" to be "success or failure"
Apr 20 16:36:53.467: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.280469ms
Apr 20 16:36:55.470: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013401953s
Apr 20 16:36:57.489: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032023644s
Apr 20 16:36:59.492: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035885541s
Apr 20 16:37:01.496: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039476189s
Apr 20 16:37:03.576: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119799543s
Apr 20 16:37:05.579: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.122549831s
Apr 20 16:37:07.581: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 14.124873605s
Apr 20 16:37:09.584: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 16.127551799s
Apr 20 16:37:11.587: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 18.130872444s
Apr 20 16:37:13.591: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 20.13468478s
Apr 20 16:37:15.629: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 22.172007559s
Apr 20 16:37:17.631: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 24.174971276s
Apr 20 16:37:19.831: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 26.374728426s
Apr 20 16:37:21.834: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 28.37717877s
Apr 20 16:37:23.837: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 30.380270373s
Apr 20 16:37:25.840: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 32.383129467s
Apr 20 16:37:27.842: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 34.385801429s
Apr 20 16:37:29.845: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Running", Reason="", readiness=true. Elapsed: 36.388925317s
Apr 20 16:37:31.848: INFO: Pod "pod-subpath-test-downwardapi-mmlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.391923966s
STEP: Saw pod success
Apr 20 16:37:31.848: INFO: Pod "pod-subpath-test-downwardapi-mmlq" satisfied condition "success or failure"
Apr 20 16:37:31.850: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-mmlq container test-container-subpath-downwardapi-mmlq:
STEP: delete the pod
Apr 20 16:37:31.871: INFO: Waiting for pod pod-subpath-test-downwardapi-mmlq to disappear
Apr 20 16:37:32.970: INFO: Pod pod-subpath-test-downwardapi-mmlq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mmlq
Apr 20 16:37:32.970: INFO: Deleting pod "pod-subpath-test-downwardapi-mmlq" in namespace "subpath-3893"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:37:32.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3893" for this suite.
Apr 20 16:37:39.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:37:39.213: INFO: namespace subpath-3893 deletion completed in 6.191938869s
• [SLOW TEST:45.922 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:37:39.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-0d23d79e-2d7c-4f5d-a90e-a78a2ce652b5
STEP: Creating configMap with name cm-test-opt-upd-f632058a-dc48-4ab6-9eea-f72d1028ae5f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0d23d79e-2d7c-4f5d-a90e-a78a2ce652b5
STEP: Updating configmap cm-test-opt-upd-f632058a-dc48-4ab6-9eea-f72d1028ae5f
STEP: Creating configMap with name cm-test-opt-create-990f9fcc-5f68-4daf-91c5-79fad579b33e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:39:28.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6411" for this suite.
Apr 20 16:40:11.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:40:11.336: INFO: namespace configmap-6411 deletion completed in 42.47051699s
• [SLOW TEST:152.123 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:40:11.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:40:14.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505" in namespace "projected-8034" to be "success or failure"
Apr 20 16:40:14.578: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Pending", Reason="", readiness=false. Elapsed: 338.958805ms
Apr 20 16:40:17.941: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702108081s
Apr 20 16:40:20.242: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Pending", Reason="", readiness=false. Elapsed: 6.003082557s
Apr 20 16:40:22.245: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00656786s
Apr 20 16:40:24.494: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Pending", Reason="", readiness=false. Elapsed: 10.255126186s
Apr 20 16:40:26.595: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Running", Reason="", readiness=true. Elapsed: 12.356505379s
Apr 20 16:40:28.600: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.360909033s
STEP: Saw pod success
Apr 20 16:40:28.600: INFO: Pod "downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505" satisfied condition "success or failure"
Apr 20 16:40:28.603: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505 container client-container:
STEP: delete the pod
Apr 20 16:40:29.733: INFO: Waiting for pod downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505 to disappear
Apr 20 16:40:29.943: INFO: Pod downwardapi-volume-2a361999-72a7-494e-ba79-35f9be29d505 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:40:29.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8034" for this suite.
Apr 20 16:40:38.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:40:38.368: INFO: namespace projected-8034 deletion completed in 8.421668062s
• [SLOW TEST:27.032 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:40:38.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 20 16:41:19.804: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:19.804: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.175: INFO: Exec stderr: "" Apr 20 16:41:20.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.175: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.273: INFO: Exec stderr: "" Apr 20 16:41:20.273: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.273: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.388: INFO: Exec stderr: "" Apr 20 16:41:20.388: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.388: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.710: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 20 16:41:20.710: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.710: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.813: INFO: Exec stderr: "" Apr 20 16:41:20.813: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.813: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:20.925: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 20 16:41:20.925: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:20.925: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:21.317: INFO: Exec stderr: "" Apr 20 16:41:21.317: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:21.317: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:21.522: INFO: Exec stderr: "" Apr 20 16:41:21.522: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:21.522: INFO: >>> kubeConfig: 
/root/.kube/config Apr 20 16:41:21.820: INFO: Exec stderr: "" Apr 20 16:41:21.820: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7658 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:41:21.821: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:41:21.985: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:41:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7658" for this suite. Apr 20 16:42:24.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:42:24.638: INFO: namespace e2e-kubelet-etc-hosts-7658 deletion completed in 1m2.650285199s • [SLOW TEST:106.270 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:42:24.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5618, will wait for the garbage collector to delete the pods Apr 20 16:42:38.846: INFO: Deleting Job.batch foo took: 4.190311ms Apr 20 16:42:39.247: INFO: Terminating Job.batch foo pods took: 400.209572ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:43:15.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5618" for this suite. Apr 20 16:43:23.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:43:23.734: INFO: namespace job-5618 deletion completed in 8.143316911s • [SLOW TEST:59.095 seconds] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] 
Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:43:23.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 20 16:43:34.240: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:43:34.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6315" for this suite. 
Apr 20 16:43:40.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:43:40.448: INFO: namespace container-runtime-6315 deletion completed in 6.116691189s • [SLOW TEST:16.714 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:43:40.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-fqfk STEP: Creating a pod to test atomic-volume-subpath Apr 20 16:43:40.523: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fqfk" in namespace "subpath-4717" to be "success or failure" Apr 20 16:43:40.538: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.555815ms Apr 20 16:43:42.541: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018274537s Apr 20 16:43:44.544: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021890031s Apr 20 16:43:46.646: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123646438s Apr 20 16:43:48.650: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 8.127268289s Apr 20 16:43:50.653: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 10.130301928s Apr 20 16:43:52.658: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 12.135128064s Apr 20 16:43:54.662: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 14.139551464s Apr 20 16:43:56.665: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 16.142750722s Apr 20 16:43:58.669: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.14599048s Apr 20 16:44:00.672: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 20.149000199s Apr 20 16:44:02.674: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 22.151490164s Apr 20 16:44:04.677: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 24.154751963s Apr 20 16:44:06.681: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 26.158150632s Apr 20 16:44:08.784: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Running", Reason="", readiness=true. Elapsed: 28.261439288s Apr 20 16:44:10.814: INFO: Pod "pod-subpath-test-configmap-fqfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.291435499s STEP: Saw pod success Apr 20 16:44:10.814: INFO: Pod "pod-subpath-test-configmap-fqfk" satisfied condition "success or failure" Apr 20 16:44:10.816: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-fqfk container test-container-subpath-configmap-fqfk: STEP: delete the pod Apr 20 16:44:10.871: INFO: Waiting for pod pod-subpath-test-configmap-fqfk to disappear Apr 20 16:44:10.905: INFO: Pod pod-subpath-test-configmap-fqfk no longer exists STEP: Deleting pod pod-subpath-test-configmap-fqfk Apr 20 16:44:10.905: INFO: Deleting pod "pod-subpath-test-configmap-fqfk" in namespace "subpath-4717" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:44:10.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4717" for this suite. 
Apr 20 16:44:22.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:44:23.147: INFO: namespace subpath-4717 deletion completed in 12.238419107s • [SLOW TEST:42.699 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:44:23.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 20 16:44:23.292: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:44:29.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4780" for this suite. Apr 20 16:45:33.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:45:33.430: INFO: namespace pods-4780 deletion completed in 1m4.090267958s • [SLOW TEST:70.281 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:45:33.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 20 16:45:33.540: INFO: Waiting up to 5m0s for pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141" in namespace "emptydir-9015" to be "success or failure" Apr 20 16:45:33.552: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141": Phase="Pending", Reason="", readiness=false. Elapsed: 12.085453ms Apr 20 16:45:35.556: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015408827s Apr 20 16:45:37.559: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018865072s Apr 20 16:45:39.576: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035303107s Apr 20 16:45:41.580: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039665406s STEP: Saw pod success Apr 20 16:45:41.580: INFO: Pod "pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141" satisfied condition "success or failure" Apr 20 16:45:41.583: INFO: Trying to get logs from node iruya-worker pod pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141 container test-container: STEP: delete the pod Apr 20 16:45:41.931: INFO: Waiting for pod pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141 to disappear Apr 20 16:45:42.133: INFO: Pod pod-9cf13c1d-1dbb-42d8-a4a3-f26cd2a91141 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:45:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9015" for this suite. 
Apr 20 16:45:48.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:45:48.292: INFO: namespace emptydir-9015 deletion completed in 6.155760621s • [SLOW TEST:14.862 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:45:48.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb Apr 20 16:45:48.451: INFO: Pod name my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb: Found 0 pods out of 1 Apr 20 16:45:53.584: INFO: Pod name my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb: Found 1 pods out of 1 Apr 20 16:45:53.584: INFO: 
Ensuring all pods for ReplicationController "my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb" are running Apr 20 16:45:59.867: INFO: Pod "my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb-tqw9g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:45:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:45:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:45:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-20 16:45:48 +0000 UTC Reason: Message:}]) Apr 20 16:45:59.867: INFO: Trying to dial the pod Apr 20 16:46:04.878: INFO: Controller my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb: Got expected result from replica 1 [my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb-tqw9g]: "my-hostname-basic-ef6f9876-0781-48e6-b434-5ab8591f2cfb-tqw9g", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:46:04.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8659" for this suite. 
Apr 20 16:46:10.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:46:10.985: INFO: namespace replication-controller-8659 deletion completed in 6.103168529s • [SLOW TEST:22.693 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:46:10.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 20 16:46:11.122: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:46:12.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4814" for this suite. Apr 20 16:46:18.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:46:18.316: INFO: namespace custom-resource-definition-4814 deletion completed in 6.111266887s • [SLOW TEST:7.331 seconds] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:46:18.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 20 16:46:18.381: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 20 16:46:18.449: INFO: Waiting for terminating namespaces to be deleted... Apr 20 16:46:18.452: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 20 16:46:18.455: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.456: INFO: Container kube-proxy ready: true, restart count 0 Apr 20 16:46:18.456: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.456: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 16:46:18.456: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.456: INFO: Container chaos-daemon ready: true, restart count 0 Apr 20 16:46:18.456: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 20 16:46:18.459: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.459: INFO: Container chaos-mesh ready: true, restart count 0 Apr 20 16:46:18.459: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.459: INFO: Container kindnet-cni ready: true, restart count 0 Apr 20 16:46:18.459: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded) Apr 20 16:46:18.459: INFO: Container chaos-daemon ready: true, restart count 0 Apr 20 16:46:18.459: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded) Apr 20 
16:46:18.459: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16779ec94c6d335d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:46:19.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8079" for this suite. Apr 20 16:46:26.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:46:27.841: INFO: namespace sched-pred-8079 deletion completed in 8.224669243s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:9.524 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:46:27.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 20 16:46:39.106: INFO: Successfully updated pod "annotationupdatee3ab0988-e8a8-4834-ad7a-ba3098d8f33b" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:46:41.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7568" for this suite. 
Apr 20 16:47:03.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:47:03.727: INFO: namespace projected-7568 deletion completed in 22.243530116s • [SLOW TEST:35.887 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:47:03.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1790 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 20 16:47:04.664: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 20 16:47:35.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q 
-s 'http://10.244.2.223:8080/dial?request=hostName&protocol=udp&host=10.244.2.222&port=8081&tries=1'] Namespace:pod-network-test-1790 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:47:35.651: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:47:35.757: INFO: Waiting for endpoints: map[] Apr 20 16:47:35.760: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.223:8080/dial?request=hostName&protocol=udp&host=10.244.1.35&port=8081&tries=1'] Namespace:pod-network-test-1790 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 16:47:35.760: INFO: >>> kubeConfig: /root/.kube/config Apr 20 16:47:35.862: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:47:35.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1790" for this suite. 
Apr 20 16:47:57.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:47:57.942: INFO: namespace pod-network-test-1790 deletion completed in 22.075539494s • [SLOW TEST:54.214 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:47:57.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:48:06.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7105" for this suite. Apr 20 16:48:52.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:48:52.644: INFO: namespace kubelet-test-7105 deletion completed in 46.63676769s • [SLOW TEST:54.701 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:48:52.645: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1282 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1282 STEP: Creating statefulset with conflicting port in namespace statefulset-1282 STEP: Waiting until pod test-pod will start running in namespace statefulset-1282 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1282 Apr 20 16:48:58.797: INFO: Observed stateful pod in namespace: statefulset-1282, name: ss-0, uid: 47bf29d1-fb85-461f-816a-ffd423fc2664, status phase: Pending. Waiting for statefulset controller to delete. Apr 20 16:48:59.121: INFO: Observed stateful pod in namespace: statefulset-1282, name: ss-0, uid: 47bf29d1-fb85-461f-816a-ffd423fc2664, status phase: Failed. Waiting for statefulset controller to delete. Apr 20 16:48:59.139: INFO: Observed stateful pod in namespace: statefulset-1282, name: ss-0, uid: 47bf29d1-fb85-461f-816a-ffd423fc2664, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 20 16:48:59.157: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1282 STEP: Removing pod with conflicting port in namespace statefulset-1282 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1282 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 20 16:49:05.280: INFO: Deleting all statefulset in ns statefulset-1282 Apr 20 16:49:05.283: INFO: Scaling statefulset ss to 0 Apr 20 16:49:25.302: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 16:49:25.305: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:49:25.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1282" for this suite. 
Apr 20 16:49:31.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:49:31.441: INFO: namespace statefulset-1282 deletion completed in 6.103682782s • [SLOW TEST:38.796 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:49:31.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 
[It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:49:31.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6455" for this suite. Apr 20 16:49:53.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:49:53.743: INFO: namespace kubelet-test-6455 deletion completed in 22.147389493s • [SLOW TEST:22.302 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 20 16:49:53.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0420 16:50:05.141673 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 20 16:50:05.141: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:50:05.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6188" for this suite. Apr 20 16:50:13.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:50:13.412: INFO: namespace gc-6188 deletion completed in 8.230317989s • [SLOW TEST:19.669 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:50:13.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 20 16:50:13.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 20 16:50:14.183: INFO: stderr: "" Apr 20 16:50:14.183: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:50:14.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6617" for this suite. 
Apr 20 16:50:20.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:50:20.444: INFO: namespace kubectl-6617 deletion completed in 6.255224157s • [SLOW TEST:7.031 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:50:20.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-1fb4786b-e2b1-4c17-87ee-8343bd1073c0 STEP: Creating secret with name 
secret-projected-all-test-volume-8c321ce7-aeec-4691-99d2-a85ac7e54a5a STEP: Creating a pod to test Check all projections for projected volume plugin Apr 20 16:50:20.538: INFO: Waiting up to 5m0s for pod "projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde" in namespace "projected-6282" to be "success or failure" Apr 20 16:50:20.542: INFO: Pod "projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558776ms Apr 20 16:50:22.551: INFO: Pod "projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012284762s Apr 20 16:50:24.554: INFO: Pod "projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016197134s STEP: Saw pod success Apr 20 16:50:24.555: INFO: Pod "projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde" satisfied condition "success or failure" Apr 20 16:50:24.558: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde container projected-all-volume-test: STEP: delete the pod Apr 20 16:50:24.592: INFO: Waiting for pod projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde to disappear Apr 20 16:50:24.596: INFO: Pod projected-volume-fd1e6c44-8d26-4a05-877a-27d58cb40cde no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:50:24.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6282" for this suite. 
Apr 20 16:50:30.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:50:30.698: INFO: namespace projected-6282 deletion completed in 6.098211656s • [SLOW TEST:10.254 seconds] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:50:30.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 20 16:50:30.766: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e" in namespace "downward-api-5317" to be "success or failure" Apr 20 16:50:30.770: INFO: Pod "downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.946531ms Apr 20 16:50:32.774: INFO: Pod "downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008156302s Apr 20 16:50:34.779: INFO: Pod "downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012704495s STEP: Saw pod success Apr 20 16:50:34.779: INFO: Pod "downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e" satisfied condition "success or failure" Apr 20 16:50:34.782: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e container client-container: STEP: delete the pod Apr 20 16:50:34.816: INFO: Waiting for pod downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e to disappear Apr 20 16:50:34.824: INFO: Pod downwardapi-volume-f1f48fbd-056b-4d21-9ecf-8a3093520a5e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:50:34.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5317" for this suite. 
Apr 20 16:50:40.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:50:40.926: INFO: namespace downward-api-5317 deletion completed in 6.098975927s • [SLOW TEST:10.227 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:50:40.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 20 16:50:41.010: INFO: Waiting up to 5m0s for pod "pod-65114b39-8feb-4961-ad26-f9ccdfe30745" in namespace "emptydir-8970" to be "success or failure" Apr 20 16:50:41.020: INFO: Pod 
"pod-65114b39-8feb-4961-ad26-f9ccdfe30745": Phase="Pending", Reason="", readiness=false. Elapsed: 9.856467ms Apr 20 16:50:43.024: INFO: Pod "pod-65114b39-8feb-4961-ad26-f9ccdfe30745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014459515s Apr 20 16:50:45.028: INFO: Pod "pod-65114b39-8feb-4961-ad26-f9ccdfe30745": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018126392s STEP: Saw pod success Apr 20 16:50:45.028: INFO: Pod "pod-65114b39-8feb-4961-ad26-f9ccdfe30745" satisfied condition "success or failure" Apr 20 16:50:45.030: INFO: Trying to get logs from node iruya-worker2 pod pod-65114b39-8feb-4961-ad26-f9ccdfe30745 container test-container: STEP: delete the pod Apr 20 16:50:45.093: INFO: Waiting for pod pod-65114b39-8feb-4961-ad26-f9ccdfe30745 to disappear Apr 20 16:50:45.149: INFO: Pod pod-65114b39-8feb-4961-ad26-f9ccdfe30745 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:50:45.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8970" for this suite. 
Apr 20 16:50:51.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:50:51.273: INFO: namespace emptydir-8970 deletion completed in 6.119212086s
• [SLOW TEST:10.344 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:50:51.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 20 16:50:51.399: INFO: Waiting up to 5m0s for pod "downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07" in namespace "downward-api-5032" to be "success or failure"
Apr 20 16:50:51.433: INFO: Pod "downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07": Phase="Pending", Reason="", readiness=false. Elapsed: 33.701826ms
Apr 20 16:50:53.522: INFO: Pod "downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122622371s
Apr 20 16:50:55.525: INFO: Pod "downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12609874s
STEP: Saw pod success
Apr 20 16:50:55.525: INFO: Pod "downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07" satisfied condition "success or failure"
Apr 20 16:50:55.527: INFO: Trying to get logs from node iruya-worker pod downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07 container dapi-container:
STEP: delete the pod
Apr 20 16:50:55.579: INFO: Waiting for pod downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07 to disappear
Apr 20 16:50:55.584: INFO: Pod downward-api-e60b74bc-dbd7-4a52-8390-4f8126a61a07 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:50:55.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5032" for this suite.
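The "host IP as an env var" pod above relies on the downward API's `fieldRef` mechanism. A minimal equivalent manifest would look roughly like this (name and image are illustrative, not taken from the log):

```yaml
# Sketch only: exposing the node's IP to a container via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # populated by the kubelet at pod start
```

The test then reads the container log and verifies the value is a valid node IP.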
Apr 20 16:51:01.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:51:01.725: INFO: namespace downward-api-5032 deletion completed in 6.13655558s
• [SLOW TEST:10.452 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:51:01.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8af66fa7-06b9-45f2-b712-daf431408045
STEP: Creating a pod to test consume secrets
Apr 20 16:51:01.844: INFO: Waiting up to 5m0s for pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5" in namespace "secrets-7086" to be "success or failure"
Apr 20 16:51:01.863: INFO: Pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.088492ms
Apr 20 16:51:03.867: INFO: Pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022271273s
Apr 20 16:51:05.870: INFO: Pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5": Phase="Running", Reason="", readiness=true. Elapsed: 4.025842022s
Apr 20 16:51:07.874: INFO: Pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029388679s
STEP: Saw pod success
Apr 20 16:51:07.874: INFO: Pod "pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5" satisfied condition "success or failure"
Apr 20 16:51:07.877: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5 container secret-volume-test:
STEP: delete the pod
Apr 20 16:51:07.952: INFO: Waiting for pod pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5 to disappear
Apr 20 16:51:07.954: INFO: Pod pod-secrets-b989a67e-562a-48c0-a7cc-5ed646c3b6f5 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:51:07.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7086" for this suite.
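The "consumable in multiple volumes" pod above mounts the same Secret at two paths. A rough hand-written equivalent follows (pod name, secret name, key, and command are illustrative assumptions):

```yaml
# Sketch only: one Secret consumed through two volumes in the same pod.
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                     # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data && cat /etc/secret-volume-2/data"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret            # hypothetical; the test generates a random name
  - name: secret-volume-2
    secret:
      secretName: my-secret            # the same Secret, mounted a second time
```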
Apr 20 16:51:13.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:51:14.145: INFO: namespace secrets-7086 deletion completed in 6.187078652s
• [SLOW TEST:12.421 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:51:14.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 16:51:14.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f" in namespace "downward-api-9651" to be "success or failure"
Apr 20 16:51:14.241: INFO: Pod "downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.315846ms
Apr 20 16:51:16.437: INFO: Pod "downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231428776s
Apr 20 16:51:18.442: INFO: Pod "downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235807449s
STEP: Saw pod success
Apr 20 16:51:18.442: INFO: Pod "downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f" satisfied condition "success or failure"
Apr 20 16:51:18.445: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f container client-container:
STEP: delete the pod
Apr 20 16:51:18.662: INFO: Waiting for pod downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f to disappear
Apr 20 16:51:18.845: INFO: Pod downwardapi-volume-73b6bcc9-6521-4ac6-b5b0-b21658de705f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:51:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9651" for this suite.
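The "set mode on item file" pod above exercises per-item file modes on a downward API volume. A minimal equivalent would look roughly like this (names, image, and the 0400 mode are illustrative assumptions):

```yaml
# Sketch only: per-item file mode on a downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                # octal mode applied to this projected file
```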
Apr 20 16:51:24.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:51:25.061: INFO: namespace downward-api-9651 deletion completed in 6.158153166s
• [SLOW TEST:10.915 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:51:25.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:52:25.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4863" for this suite.
Apr 20 16:52:47.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:52:47.334: INFO: namespace container-probe-4863 deletion completed in 22.13343791s
• [SLOW TEST:82.273 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:52:47.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:52:47.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-214" for this suite.
Apr 20 16:53:09.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:53:09.585: INFO: namespace pods-214 deletion completed in 22.115723262s
• [SLOW TEST:22.250 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:53:09.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 20 16:53:09.668: INFO: Waiting up to 5m0s for pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2" in namespace "downward-api-409" to be "success or failure"
Apr 20 16:53:09.672: INFO: Pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.686106ms
Apr 20 16:53:11.769: INFO: Pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101078111s
Apr 20 16:53:13.772: INFO: Pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104263792s
Apr 20 16:53:15.775: INFO: Pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10664463s
STEP: Saw pod success
Apr 20 16:53:15.775: INFO: Pod "downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2" satisfied condition "success or failure"
Apr 20 16:53:15.776: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2 container dapi-container:
STEP: delete the pod
Apr 20 16:53:15.809: INFO: Waiting for pod downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2 to disappear
Apr 20 16:53:15.836: INFO: Pod downward-api-f2b37428-2230-4b3b-8e2c-8ba74b5be3f2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:53:15.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-409" for this suite.
Apr 20 16:53:21.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:53:21.963: INFO: namespace downward-api-409 deletion completed in 6.123248192s
• [SLOW TEST:12.377 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:53:21.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5293
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 20 16:53:22.017: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 20 16:53:42.461: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostName&protocol=http&host=10.244.2.236&port=8080&tries=1'] Namespace:pod-network-test-5293 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 20 16:53:42.461: INFO: >>> kubeConfig: /root/.kube/config
Apr 20 16:53:42.595: INFO: Waiting for endpoints: map[]
Apr 20 16:53:42.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostName&protocol=http&host=10.244.1.47&port=8080&tries=1'] Namespace:pod-network-test-5293 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 20 16:53:42.599: INFO: >>> kubeConfig: /root/.kube/config
Apr 20 16:53:42.732: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:53:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5293" for this suite.
Apr 20 16:54:04.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:54:04.848: INFO: namespace pod-network-test-5293 deletion completed in 22.11035058s
• [SLOW TEST:42.885 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:54:04.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 20 16:54:04.929: INFO: Waiting up to 5m0s for pod "var-expansion-b55c23dc-7269-4699-9590-999042e5063f" in namespace "var-expansion-940" to be "success or failure"
Apr 20 16:54:04.957: INFO: Pod "var-expansion-b55c23dc-7269-4699-9590-999042e5063f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.726208ms
Apr 20 16:54:06.961: INFO: Pod "var-expansion-b55c23dc-7269-4699-9590-999042e5063f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031882188s
Apr 20 16:54:08.966: INFO: Pod "var-expansion-b55c23dc-7269-4699-9590-999042e5063f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036200665s
STEP: Saw pod success
Apr 20 16:54:08.966: INFO: Pod "var-expansion-b55c23dc-7269-4699-9590-999042e5063f" satisfied condition "success or failure"
Apr 20 16:54:08.969: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-b55c23dc-7269-4699-9590-999042e5063f container dapi-container:
STEP: delete the pod
Apr 20 16:54:08.998: INFO: Waiting for pod var-expansion-b55c23dc-7269-4699-9590-999042e5063f to disappear
Apr 20 16:54:09.011: INFO: Pod var-expansion-b55c23dc-7269-4699-9590-999042e5063f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:54:09.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-940" for this suite.
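The "substitution in container's command" pod above relies on Kubernetes expanding `$(VAR)` references in `command`/`args` from the container's environment. A minimal equivalent manifest would look roughly like this (name, image, and message are illustrative assumptions):

```yaml
# Sketch only: $(VAR) in command is expanded by Kubernetes, not by the shell.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # illustrative image
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c", "echo $(MESSAGE)"]  # $(MESSAGE) substituted before exec
```

The test then reads the container log and checks that the expanded value appears.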
Apr 20 16:54:15.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:54:15.123: INFO: namespace var-expansion-940 deletion completed in 6.108663625s
• [SLOW TEST:10.275 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:54:15.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 20 16:54:15.736: INFO: created pod pod-service-account-defaultsa
Apr 20 16:54:15.736: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 20 16:54:15.742: INFO: created pod pod-service-account-mountsa
Apr 20 16:54:15.742: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 20 16:54:15.748: INFO: created pod pod-service-account-nomountsa
Apr 20 16:54:15.748: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 20 16:54:15.769: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 20 16:54:15.769: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 20 16:54:15.785: INFO: created pod pod-service-account-mountsa-mountspec
Apr 20 16:54:15.785: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 20 16:54:15.826: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 20 16:54:15.826: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 20 16:54:15.897: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 20 16:54:15.897: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 20 16:54:15.950: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 20 16:54:15.950: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 20 16:54:15.981: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 20 16:54:15.981: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 16:54:15.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1315" for this suite.
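The pod matrix above (defaultsa/mountsa/nomountsa crossed with mountspec/nomountspec/unset) exercises the precedence rule: a pod-level `automountServiceAccountToken` overrides the ServiceAccount-level setting. One pair from that matrix would look roughly like this (object names and image are illustrative assumptions):

```yaml
# Sketch only: pod-level automountServiceAccountToken overrides the
# ServiceAccount's setting -- matching "pod-service-account-nomountsa-mountspec
# service account token volume mount: true" in the log above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # hypothetical name
automountServiceAccountToken: false    # SA opts out of token automount
---
apiVersion: v1
kind: Pod
metadata:
  name: nomountsa-mountspec-example    # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # pod spec wins: token IS mounted
  containers:
  - name: token-test
    image: busybox                     # illustrative image
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount"]
```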
Apr 20 16:54:44.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 16:54:44.264: INFO: namespace svcaccounts-1315 deletion completed in 28.193825242s
• [SLOW TEST:29.140 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 16:54:44.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 16:54:44.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2046'
Apr 20 16:54:47.098: INFO: stderr: ""
Apr 20 16:54:47.098: INFO: stdout: "replicationcontroller/redis-master created\n"
Apr 20 16:54:47.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2046'
Apr 20 16:54:47.377: INFO: stderr: ""
Apr 20 16:54:47.377: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 20 16:54:48.382: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 16:54:48.382: INFO: Found 0 / 1
Apr 20 16:54:49.381: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 16:54:49.381: INFO: Found 0 / 1
Apr 20 16:54:50.382: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 16:54:50.382: INFO: Found 0 / 1
Apr 20 16:54:51.381: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 16:54:51.381: INFO: Found 1 / 1
Apr 20 16:54:51.381: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 20 16:54:51.383: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 16:54:51.383: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 20 16:54:51.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jhckc --namespace=kubectl-2046'
Apr 20 16:54:51.518: INFO: stderr: ""
Apr 20 16:54:51.518: INFO: stdout: "Name: redis-master-jhckc\nNamespace: kubectl-2046\nPriority: 0\nNode: iruya-worker/172.18.0.3\nStart Time: Tue, 20 Apr 2021 16:54:47 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.53\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://9a3489f5219e208e6a68d62ded843955264b85cab65b41714b8d333c81b8066e\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 20 Apr 2021 16:54:50 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nlhk8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nlhk8:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nlhk8\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2046/redis-master-jhckc to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
Apr 20 16:54:51.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2046'
Apr 20 16:54:51.644: INFO: stderr: ""
Apr 20 16:54:51.644: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2046\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-jhckc\n"
Apr 20 16:54:51.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2046'
Apr 20 16:54:51.746: INFO: stderr: ""
Apr 20 16:54:51.746: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2046\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.41.8\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.53:6379\nSession Affinity: None\nEvents: \n"
Apr 20 16:54:51.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 20 16:54:51.863: INFO: stderr: ""
Apr 20 16:54:51.863: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 13 Apr 2021 08:08:26 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 20 Apr 2021 16:54:20 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 20 Apr 2021 16:54:20 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 20 Apr 2021 16:54:20 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 20 Apr 2021 16:54:20 +0000 Tue, 13 Apr 2021 08:08:56 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.5\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759824Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759824Ki\n pods: 110\nSystem Info:\n Machine ID: a3f1bf480bee4ba1be0d7febdcd2e8d2\n System UUID: 10a84bce-4959-48c9-a590-36d45dfcec7d\n Boot ID: dc0058b1-aa97-45b0-baf9-d3a69a0326a3\n Kernel Version: 4.15.0-141-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-106-gce4439a8\n Kubelet Version: v1.15.12\n Kube-Proxy Version: v1.15.12\nPodCIDR: 10.244.0.0/24\nProviderID: kind://docker/iruya/iruya-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-jpgqt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d8h\n kube-system coredns-5d4dd4b4db-vvtjr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d8h\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d8h\n kube-system kindnet-vqf27 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 7d8h\n kube-system kube-apiserver-iruya-control-plane
250m (1%) 0 (0%) 0 (0%) 0 (0%) 7d8h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 7d8h\n kube-system kube-proxy-hr9lp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d8h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 7d8h\n local-path-storage local-path-provisioner-7f465859dc-kvv5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d8h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 20 16:54:51.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2046' Apr 20 16:54:51.959: INFO: stderr: "" Apr 20 16:54:51.959: INFO: stdout: "Name: kubectl-2046\nLabels: e2e-framework=kubectl\n e2e-run=983e8289-b5b6-41bb-b833-66f5e3504223\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:54:51.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2046" for this suite. 
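The `kubectl describe` sequence captured above (pod, rc, service, node, namespace) can be reproduced by hand against any cluster. A minimal sketch, assuming a reachable cluster via your kubeconfig; the namespace and pod names below are placeholders, since the real ones in this log are test-generated:

```shell
# Hedged sketch: re-run the same describe calls this conformance test performs.
# NS and the pod name are placeholders; substitute your own resources.
NS=kubectl-demo
kubectl describe pod redis-master-xxxxx --namespace="$NS"
kubectl describe rc redis-master --namespace="$NS"
kubectl describe service redis-master --namespace="$NS"
kubectl describe "$(kubectl get nodes -o name | head -n 1)"
kubectl describe namespace "$NS"
```

The test only checks that each command succeeds and that the output contains the relevant fields (names, labels, events), which is why the full describe stdout is echoed into the log above.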
Apr 20 16:55:13.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:55:14.073: INFO: namespace kubectl-2046 deletion completed in 22.110963429s • [SLOW TEST:29.808 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:55:14.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-124254f1-5a18-4fc2-9faa-cd9ebd7892aa STEP: Creating the pod STEP: Updating configmap 
projected-configmap-test-upd-124254f1-5a18-4fc2-9faa-cd9ebd7892aa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:55:20.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6101" for this suite. Apr 20 16:55:42.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:55:42.292: INFO: namespace projected-6101 deletion completed in 22.091465493s • [SLOW TEST:28.218 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:55:42.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 20 16:55:46.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6cea1218-a926-49ff-90b2-a071ae4c063a -c busybox-main-container --namespace=emptydir-1454 -- cat /usr/share/volumeshare/shareddata.txt' Apr 20 16:55:46.953: INFO: stderr: "" Apr 20 16:55:46.953: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:55:46.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1454" for this suite. Apr 20 16:55:52.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:55:53.088: INFO: namespace emptydir-1454 deletion completed in 6.130297239s • [SLOW TEST:10.795 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] 
ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:55:53.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2f5cb72d-50e3-4b92-8c14-280c5eb2f4be STEP: Creating a pod to test consume configMaps Apr 20 16:55:53.184: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd" in namespace "configmap-6281" to be "success or failure" Apr 20 16:55:53.187: INFO: Pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463775ms Apr 20 16:55:55.191: INFO: Pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006704763s Apr 20 16:55:57.195: INFO: Pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01069352s Apr 20 16:55:59.384: INFO: Pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.199660522s STEP: Saw pod success Apr 20 16:55:59.384: INFO: Pod "pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd" satisfied condition "success or failure" Apr 20 16:55:59.387: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd container configmap-volume-test: STEP: delete the pod Apr 20 16:55:59.648: INFO: Waiting for pod pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd to disappear Apr 20 16:55:59.682: INFO: Pod pod-configmaps-7d360143-b714-4e2d-a102-55ec3cace7bd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:55:59.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6281" for this suite. Apr 20 16:56:05.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:56:05.788: INFO: namespace configmap-6281 deletion completed in 6.102153127s • [SLOW TEST:12.699 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:56:05.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 20 16:56:05.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6362' Apr 20 16:56:05.998: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 20 16:56:05.998: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 20 16:56:06.019: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7xrvp] Apr 20 16:56:06.019: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7xrvp" in namespace "kubectl-6362" to be "running and ready" Apr 20 16:56:06.054: INFO: Pod "e2e-test-nginx-rc-7xrvp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.863532ms Apr 20 16:56:08.062: INFO: Pod "e2e-test-nginx-rc-7xrvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042484361s Apr 20 16:56:10.066: INFO: Pod "e2e-test-nginx-rc-7xrvp": Phase="Running", Reason="", readiness=true. Elapsed: 4.04678125s Apr 20 16:56:10.066: INFO: Pod "e2e-test-nginx-rc-7xrvp" satisfied condition "running and ready" Apr 20 16:56:10.066: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-7xrvp] Apr 20 16:56:10.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6362' Apr 20 16:56:10.178: INFO: stderr: "" Apr 20 16:56:10.178: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 20 16:56:10.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6362' Apr 20 16:56:10.267: INFO: stderr: "" Apr 20 16:56:10.267: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:56:10.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6362" for this suite. 
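The stderr captured above shows that `kubectl run --generator=run/v1` was already deprecated in this release. A rough modern equivalent of what the test creates (a controller managing a single nginx pod), assuming a newer kubectl where the rc generator has been removed:

```shell
# Create a managed nginx workload; a Deployment stands in for the legacy
# ReplicationController that --generator=run/v1 produced.
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine

# As in the test, logs can be fetched through the controller object:
kubectl logs deployment/e2e-test-nginx

# Clean up, mirroring the test's AfterEach:
kubectl delete deployment e2e-test-nginx
```

Note that the test's `kubectl logs rc/e2e-test-nginx` returned empty stdout above: nginx 1.14 logs to files inside the container by default, so an empty log stream is still a pass for "can get logs from an rc".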
Apr 20 16:56:16.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:56:16.379: INFO: namespace kubectl-6362 deletion completed in 6.109451365s • [SLOW TEST:10.591 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:56:16.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 
pods STEP: Gathering metrics W0420 16:56:17.169180 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 20 16:56:17.169: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:56:17.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2181" for this suite. 
Apr 20 16:56:23.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:56:23.279: INFO: namespace gc-2181 deletion completed in 6.107685468s • [SLOW TEST:6.900 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:56:23.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0420 16:56:53.866505 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 20 16:56:53.866: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:56:53.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1374" for this suite. 
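The two garbage-collector specs above differ only in the deletion propagation policy: the first expects the deployment's ReplicaSet and pods to be collected, the second expects them to be orphaned. The same behavior can be exercised from the CLI; a sketch assuming a hypothetical deployment named `my-deploy` and a reasonably recent kubectl:

```shell
# Cascading delete: ReplicaSets and pods are garbage-collected,
# as the first spec ("when not orphaning") expects.
kubectl delete deployment my-deploy

# Orphan policy: the Deployment is removed but its ReplicaSet survives,
# which is what the second spec waits 30 seconds to confirm.
kubectl delete deployment my-deploy --cascade=orphan
# (older kubectl releases spelled this --cascade=false)
```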
Apr 20 16:57:01.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:57:01.960: INFO: namespace gc-1374 deletion completed in 8.090269304s • [SLOW TEST:38.681 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:57:01.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 20 16:57:02.057: INFO: Waiting up to 5m0s for pod "pod-6fc65958-26a2-465e-abda-d51f23405448" in namespace "emptydir-8324" to be "success or failure" Apr 20 16:57:02.084: INFO: Pod "pod-6fc65958-26a2-465e-abda-d51f23405448": Phase="Pending", Reason="", 
readiness=false. Elapsed: 26.7795ms Apr 20 16:57:04.087: INFO: Pod "pod-6fc65958-26a2-465e-abda-d51f23405448": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030476658s Apr 20 16:57:06.092: INFO: Pod "pod-6fc65958-26a2-465e-abda-d51f23405448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035034483s STEP: Saw pod success Apr 20 16:57:06.092: INFO: Pod "pod-6fc65958-26a2-465e-abda-d51f23405448" satisfied condition "success or failure" Apr 20 16:57:06.095: INFO: Trying to get logs from node iruya-worker pod pod-6fc65958-26a2-465e-abda-d51f23405448 container test-container: STEP: delete the pod Apr 20 16:57:06.117: INFO: Waiting for pod pod-6fc65958-26a2-465e-abda-d51f23405448 to disappear Apr 20 16:57:06.121: INFO: Pod pod-6fc65958-26a2-465e-abda-d51f23405448 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:57:06.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8324" for this suite. 
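The `(root,0644,default)` case above boils down to writing a file into an emptyDir mount as root with mode 0644 and verifying the result from inside the pod. A hand-run check against a similar pod might look like the following; the pod, container, and path names are illustrative placeholders, not this test's exact artifacts:

```shell
# Verify ownership and mode of a file on an emptyDir mount.
# Pod/container/path names below are placeholders.
kubectl exec emptydir-demo -c test-container -- ls -ln /test-volume/test-file
# A 0644 root-owned file shows as: -rw-r--r-- 1 0 0 ...
```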
Apr 20 16:57:12.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:57:12.333: INFO: namespace emptydir-8324 deletion completed in 6.208533058s • [SLOW TEST:10.373 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:57:12.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 20 16:57:12.410: INFO: Waiting up to 5m0s for pod "pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d" in namespace "emptydir-8005" to be "success or failure" Apr 20 16:57:12.426: INFO: Pod "pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.140169ms Apr 20 16:57:14.430: INFO: Pod "pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019899781s Apr 20 16:57:16.473: INFO: Pod "pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063567775s STEP: Saw pod success Apr 20 16:57:16.473: INFO: Pod "pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d" satisfied condition "success or failure" Apr 20 16:57:16.559: INFO: Trying to get logs from node iruya-worker pod pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d container test-container: STEP: delete the pod Apr 20 16:57:16.691: INFO: Waiting for pod pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d to disappear Apr 20 16:57:16.720: INFO: Pod pod-5d9ffeda-65e4-4e29-8af9-2d539c4c9f0d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:57:16.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8005" for this suite. 
Apr 20 16:57:22.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:57:22.826: INFO: namespace emptydir-8005 deletion completed in 6.099126563s • [SLOW TEST:10.492 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:57:22.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 20 16:57:22.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6869' Apr 20 16:57:23.140: INFO: stderr: "" Apr 20 16:57:23.140: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 20 16:57:23.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6869' Apr 20 16:57:23.234: INFO: stderr: "" Apr 20 16:57:23.234: INFO: stdout: "update-demo-nautilus-vctgp update-demo-nautilus-w5n5k " Apr 20 16:57:23.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vctgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:23.315: INFO: stderr: "" Apr 20 16:57:23.315: INFO: stdout: "" Apr 20 16:57:23.315: INFO: update-demo-nautilus-vctgp is created but not running Apr 20 16:57:28.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6869' Apr 20 16:57:28.414: INFO: stderr: "" Apr 20 16:57:28.414: INFO: stdout: "update-demo-nautilus-vctgp update-demo-nautilus-w5n5k " Apr 20 16:57:28.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vctgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:28.524: INFO: stderr: "" Apr 20 16:57:28.524: INFO: stdout: "true" Apr 20 16:57:28.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vctgp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:28.616: INFO: stderr: "" Apr 20 16:57:28.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 16:57:28.616: INFO: validating pod update-demo-nautilus-vctgp Apr 20 16:57:28.620: INFO: got data: { "image": "nautilus.jpg" } Apr 20 16:57:28.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 20 16:57:28.620: INFO: update-demo-nautilus-vctgp is verified up and running Apr 20 16:57:28.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5n5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:28.716: INFO: stderr: "" Apr 20 16:57:28.716: INFO: stdout: "true" Apr 20 16:57:28.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5n5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:28.809: INFO: stderr: "" Apr 20 16:57:28.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 20 16:57:28.809: INFO: validating pod update-demo-nautilus-w5n5k Apr 20 16:57:28.813: INFO: got data: { "image": "nautilus.jpg" } Apr 20 16:57:28.813: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 20 16:57:28.813: INFO: update-demo-nautilus-w5n5k is verified up and running STEP: rolling-update to new replication controller Apr 20 16:57:28.815: INFO: scanned /root for discovery docs: Apr 20 16:57:28.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6869' Apr 20 16:57:51.414: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 20 16:57:51.414: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 20 16:57:51.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6869' Apr 20 16:57:51.505: INFO: stderr: "" Apr 20 16:57:51.505: INFO: stdout: "update-demo-kitten-md6g7 update-demo-kitten-xbj9l " Apr 20 16:57:51.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-md6g7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:51.611: INFO: stderr: "" Apr 20 16:57:51.611: INFO: stdout: "true" Apr 20 16:57:51.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-md6g7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:51.701: INFO: stderr: "" Apr 20 16:57:51.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 20 16:57:51.701: INFO: validating pod update-demo-kitten-md6g7 Apr 20 16:57:51.726: INFO: got data: { "image": "kitten.jpg" } Apr 20 16:57:51.726: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 20 16:57:51.727: INFO: update-demo-kitten-md6g7 is verified up and running Apr 20 16:57:51.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xbj9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:51.824: INFO: stderr: "" Apr 20 16:57:51.824: INFO: stdout: "true" Apr 20 16:57:51.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xbj9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6869' Apr 20 16:57:51.915: INFO: stderr: "" Apr 20 16:57:51.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 20 16:57:51.915: INFO: validating pod update-demo-kitten-xbj9l Apr 20 16:57:51.919: INFO: got data: { "image": "kitten.jpg" } Apr 20 16:57:51.919: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Apr 20 16:57:51.919: INFO: update-demo-kitten-xbj9l is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 16:57:51.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6869" for this suite. Apr 20 16:58:15.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 16:58:16.032: INFO: namespace kubectl-6869 deletion completed in 24.108977995s • [SLOW TEST:53.206 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 16:58:16.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5230 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5230 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5230 Apr 20 16:58:16.134: INFO: Found 0 stateful pods, waiting for 1 Apr 20 16:58:26.139: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 20 16:58:26.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 20 16:58:26.415: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Apr 20 16:58:26.415: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 20 16:58:26.415: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 20 16:58:26.444: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 20 16:58:36.448: INFO: Waiting for pod ss-0 to enter Running - Ready=false, 
currently Running - Ready=false Apr 20 16:58:36.448: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 16:58:36.480: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999613s Apr 20 16:58:37.485: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97761269s Apr 20 16:58:38.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.97299504s Apr 20 16:58:39.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968561915s Apr 20 16:58:40.497: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964486775s Apr 20 16:58:41.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.960559s Apr 20 16:58:42.506: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.95551022s Apr 20 16:58:43.510: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.951927471s Apr 20 16:58:44.513: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.947843678s Apr 20 16:58:45.516: INFO: Verifying statefulset ss doesn't scale past 1 for another 944.927956ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5230 Apr 20 16:58:46.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 16:58:46.920: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Apr 20 16:58:46.920: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 20 16:58:46.920: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 20 16:58:46.923: INFO: Found 1 stateful pods, waiting for 3 Apr 20 16:58:57.117: INFO: Found 2 stateful pods, waiting for 3 Apr 20 16:59:07.188: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 
16:59:07.188: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 16:59:07.188: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 20 16:59:17.912: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 16:59:17.913: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 16:59:17.913: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 20 16:59:27.056: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 20 16:59:27.056: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 20 16:59:27.057: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 20 16:59:27.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 20 16:59:28.226: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Apr 20 16:59:28.226: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 20 16:59:28.226: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 20 16:59:28.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 20 16:59:30.696: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Apr 20 16:59:30.696: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 20 16:59:30.696: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true 
on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 20 16:59:30.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 20 16:59:31.700: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Apr 20 16:59:31.700: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 20 16:59:31.700: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 20 16:59:31.700: INFO: Waiting for statefulset status.replicas updated to 0 Apr 20 16:59:31.796: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 20 16:59:41.880: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 20 16:59:41.880: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 20 16:59:41.880: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 20 16:59:42.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999543s Apr 20 16:59:43.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.851160312s Apr 20 16:59:44.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.703558291s Apr 20 16:59:45.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.69970668s Apr 20 16:59:46.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.487244201s Apr 20 16:59:47.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.444874777s Apr 20 16:59:48.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.439701794s Apr 20 16:59:49.789: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.435528012s Apr 20 16:59:50.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.096426646s 
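Every exec in this test appends `|| true` so that a failed `mv` (for example, an index.html that was already moved) does not make the in-pod shell exit non-zero and abort the step; only kubectl-level failures such as the NotFound errors below still surface as `rc: 1`. The idiom in isolation (a sketch assuming POSIX sh; `/nonexistent/index.html` is just a path guaranteed to be missing):

```shell
set -e   # normally any failing command would terminate the script

# mv fails (no such file), but "|| true" forces the compound command's
# exit status to 0, so "set -e" does not kill the script here.
mv -v /nonexistent/index.html /tmp/ 2>/dev/null || true
rc=$?

echo "continued, rc=$rc"
```

This is why the log records a clean `stdout of mv ... || true` for ss-0 and ss-1 even though the command chain is deliberately failure-tolerant.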
Apr 20 16:59:51.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 92.279314ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5230 Apr 20 16:59:53.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 16:59:53.717: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Apr 20 16:59:53.717: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 20 16:59:53.717: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 20 16:59:53.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 16:59:53.945: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Apr 20 16:59:53.945: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 20 16:59:53.945: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 20 16:59:53.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 16:59:55.109: INFO: rc: 1 Apr 20 16:59:55.109: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "fa3a04c54d0156edcb669ae108f33f028d4d905cb98a5f80a3bc95d1b957fa0c": ttrpc: closed: unknown [] 
0xc001071b60 exit status 1 true [0xc003520178 0xc003520190 0xc0035201d0] [0xc003520178 0xc003520190 0xc0035201d0] [0xc003520188 0xc0035201c8] [0xba70e0 0xba70e0] 0xc00266dec0 }: Command stdout: stderr: error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "fa3a04c54d0156edcb669ae108f33f028d4d905cb98a5f80a3bc95d1b957fa0c": ttrpc: closed: unknown error: exit status 1 Apr 20 17:00:05.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:05.205: INFO: rc: 1 Apr 20 17:00:05.205: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0032e0090 exit status 1 true [0xc002670010 0xc002670068 0xc0026700a0] [0xc002670010 0xc002670068 0xc0026700a0] [0xc002670060 0xc002670088] [0xba70e0 0xba70e0] 0xc001c756e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:00:15.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:15.437: INFO: rc: 1 Apr 20 17:00:15.437: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0032e0150 exit status 1 true [0xc0026700d0 0xc002670130 0xc002670188] [0xc0026700d0 0xc002670130 0xc002670188] [0xc002670108 0xc002670178] [0xba70e0 0xba70e0] 0xc0023c1740 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:00:25.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:25.534: INFO: rc: 1 Apr 20 17:00:25.534: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002e4aed0 exit status 1 true [0xc0009c3410 0xc0009c34e8 0xc0009c3640] [0xc0009c3410 0xc0009c34e8 0xc0009c3640] [0xc0009c34a0 0xc0009c3588] [0xba70e0 0xba70e0] 0xc0000ee480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:00:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:35.666: INFO: rc: 1 Apr 20 17:00:35.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0032e0210 exit status 1 true [0xc002670198 0xc0026701f0 0xc002670258] [0xc002670198 0xc0026701f0 0xc002670258] [0xc0026701b8 0xc002670230] [0xba70e0 0xba70e0] 0xc002a58ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:00:45.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:45.760: INFO: rc: 1 Apr 20 17:00:45.760: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002e4afc0 exit status 1 true [0xc0009c36a8 0xc0009c3748 0xc0009c3818] [0xc0009c36a8 0xc0009c3748 0xc0009c3818] [0xc0009c36f8 0xc0009c37b8] [0xba70e0 0xba70e0] 0xc0029c24e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:00:55.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:00:55.851: INFO: rc: 1 Apr 20 17:00:55.852: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001071c80 exit status 1 true [0xc0035201d8 0xc003520220 0xc003520238] [0xc0035201d8 0xc003520220 0xc003520238] [0xc003520200 0xc003520230] [0xba70e0 0xba70e0] 0xc00281efc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:05.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:05.953: INFO: rc: 1 Apr 20 17:01:05.953: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00351cae0 exit status 1 true [0xc000759970 0xc0007599d0 
0xc000759a20] [0xc000759970 0xc0007599d0 0xc000759a20] [0xc0007599a0 0xc000759a08] [0xba70e0 0xba70e0] 0xc002418fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:15.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:16.044: INFO: rc: 1 Apr 20 17:01:16.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002328090 exit status 1 true [0xc0001a2000 0xc0001a3110 0xc0001a3340] [0xc0001a2000 0xc0001a3110 0xc0001a3340] [0xc0001a2f98 0xc0001a32a0] [0xba70e0 0xba70e0] 0xc0012979e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:26.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:26.264: INFO: rc: 1 Apr 20 17:01:26.264: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002328150 exit status 1 true [0xc0001a3388 0xc0001a3438 0xc0001a3658] [0xc0001a3388 0xc0001a3438 0xc0001a3658] [0xc0001a3410 0xc0001a35a0] [0xba70e0 0xba70e0] 0xc001c75e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:36.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:36.354: INFO: rc: 1 Apr 20 17:01:36.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023282a0 exit status 1 true [0xc0001a3698 0xc0001a3860 0xc0001a39d8] [0xc0001a3698 0xc0001a3860 0xc0001a39d8] [0xc0001a3800 0xc0001a38c8] [0xba70e0 0xba70e0] 0xc00266c960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:46.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:46.451: INFO: rc: 1 Apr 20 17:01:46.451: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b60f0 exit status 1 true [0xc002670010 0xc002670068 0xc0026700a0] [0xc002670010 0xc002670068 0xc0026700a0] [0xc002670060 0xc002670088] [0xba70e0 0xba70e0] 0xc0027145a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 20 17:01:56.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 20 17:01:56.541: INFO: rc: 1 Apr 20 17:01:56.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002328390 exit status 1 true [0xc0001a3aa0 0xc0001a3d08 0xc0001a3e20] [0xc0001a3aa0 0xc0001a3d08 0xc0001a3e20] [0xc0001a3cc8 0xc0001a3df0] [0xba70e0 0xba70e0] 0xc0022e8600 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Apr 20 17:02:06.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:02:06.649: INFO: rc: 1
Apr 20 17:02:06.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025b61e0 exit status 1 true [0xc0026700d0 0xc002670130 0xc002670188] [0xc0026700d0 0xc002670130 0xc002670188] [0xc002670108 0xc002670178] [0xba70e0 0xba70e0] 0xc002715140 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
(identical RunHostCmd retries, repeated every 10s from Apr 20 17:02:16 through Apr 20 17:04:51, elided)
Apr 20 17:05:01.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:05:01.652: INFO: rc: 1
Apr 20 17:05:01.652: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Apr 20 17:05:01.652: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 20 17:05:01.664: INFO: Deleting all statefulset in ns statefulset-5230
Apr 20 17:05:01.666: INFO: Scaling statefulset ss to 0
Apr 20 17:05:01.673: INFO: Waiting for statefulset status.replicas updated to 0
Apr 20 17:05:01.674: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:05:01.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5230" for this suite.
Apr 20 17:05:07.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:05:07.795: INFO: namespace statefulset-5230 deletion completed in 6.10087601s

• [SLOW TEST:411.762 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:05:07.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-5e459838-14ba-44e5-a741-9a6f018d457c
STEP: Creating a pod to test consume secrets
Apr 20 17:05:07.881: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d" in namespace "projected-5576" to be "success or failure"
Apr 20 17:05:07.892: INFO: Pod "pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.778899ms
Apr 20 17:05:09.897: INFO: Pod "pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015262565s
Apr 20 17:05:11.901: INFO: Pod "pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019377091s
STEP: Saw pod success
Apr 20 17:05:11.901: INFO: Pod "pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d" satisfied condition "success or failure"
Apr 20 17:05:11.903: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d container projected-secret-volume-test:
STEP: delete the pod
Apr 20 17:05:11.935: INFO: Waiting for pod pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d to disappear
Apr 20 17:05:11.952: INFO: Pod pod-projected-secrets-aab9e20c-3cb0-4da7-b33a-d2bab726948d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:05:11.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5576" for this suite.
Apr 20 17:05:17.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:05:18.076: INFO: namespace projected-5576 deletion completed in 6.120377386s

• [SLOW TEST:10.280 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:05:18.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-520ac9ce-b9d4-4ab7-b9c2-44cd29e6d777
STEP: Creating a pod to test consume configMaps
Apr 20 17:05:18.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80" in namespace "configmap-3620" to be "success or failure"
Apr 20 17:05:18.198: INFO: Pod "pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80": Phase="Pending", Reason="", readiness=false. Elapsed: 40.293695ms
Apr 20 17:05:20.202: INFO: Pod "pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044135117s
Apr 20 17:05:22.206: INFO: Pod "pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048654303s
STEP: Saw pod success
Apr 20 17:05:22.206: INFO: Pod "pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80" satisfied condition "success or failure"
Apr 20 17:05:22.209: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80 container configmap-volume-test:
STEP: delete the pod
Apr 20 17:05:22.247: INFO: Waiting for pod pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80 to disappear
Apr 20 17:05:22.283: INFO: Pod pod-configmaps-6d9be53b-c1b6-472f-bab2-b64895e75b80 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:05:22.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3620" for this suite.
Apr 20 17:05:28.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:05:28.421: INFO: namespace configmap-3620 deletion completed in 6.134499557s

• [SLOW TEST:10.345 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:05:28.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-6089e26b-bb01-4ec7-b8a2-26d9373d133d in namespace container-probe-8298
Apr 20 17:05:32.484: INFO: Started pod liveness-6089e26b-bb01-4ec7-b8a2-26d9373d133d in namespace container-probe-8298
STEP: checking the pod's current state and verifying that restartCount is present
Apr 20 17:05:32.487: INFO: Initial restart count of pod liveness-6089e26b-bb01-4ec7-b8a2-26d9373d133d is 0
Apr 20 17:05:52.537: INFO: Restart count of pod container-probe-8298/liveness-6089e26b-bb01-4ec7-b8a2-26d9373d133d is now 1 (20.050487388s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:05:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8298" for this suite.
Apr 20 17:05:58.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:05:58.723: INFO: namespace container-probe-8298 deletion completed in 6.134612642s

• [SLOW TEST:30.302 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:05:58.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 20 17:05:58.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1371'
Apr 20 17:05:58.934: INFO: stderr: ""
Apr 20 17:05:58.934: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 20 17:05:58.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1371'
Apr 20 17:06:09.519: INFO: stderr: ""
Apr 20 17:06:09.519: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:06:09.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1371" for this suite.
Apr 20 17:06:15.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:06:15.628: INFO: namespace kubectl-1371 deletion completed in 6.100750856s

• [SLOW TEST:16.904 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:06:15.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-591d3817-702d-497c-9061-fffb9dd5e532
STEP: Creating a pod to test consume configMaps
Apr 20 17:06:15.708: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9" in namespace "projected-2636" to be "success or failure"
Apr 20 17:06:15.726: INFO: Pod "pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.326092ms
Apr 20 17:06:17.771: INFO: Pod "pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063838914s
Apr 20 17:06:19.775: INFO: Pod "pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067560131s
STEP: Saw pod success
Apr 20 17:06:19.775: INFO: Pod "pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9" satisfied condition "success or failure"
Apr 20 17:06:19.778: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9 container projected-configmap-volume-test:
STEP: delete the pod
Apr 20 17:06:19.837: INFO: Waiting for pod pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9 to disappear
Apr 20 17:06:19.849: INFO: Pod pod-projected-configmaps-abe3a625-e24e-4a2f-a9b7-7f5add979ee9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:06:19.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2636" for this suite.
Apr 20 17:06:25.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:06:25.951: INFO: namespace projected-2636 deletion completed in 6.099250506s

• [SLOW TEST:10.323 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:06:25.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 20 17:06:26.007: INFO: Waiting up to 5m0s for pod "pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb" in namespace "emptydir-9841" to be "success or failure"
Apr 20 17:06:26.011: INFO: Pod "pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922137ms
Apr 20 17:06:28.015: INFO: Pod "pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008389979s
Apr 20 17:06:30.020: INFO: Pod "pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012571703s
STEP: Saw pod success
Apr 20 17:06:30.020: INFO: Pod "pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb" satisfied condition "success or failure"
Apr 20 17:06:30.023: INFO: Trying to get logs from node iruya-worker pod pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb container test-container:
STEP: delete the pod
Apr 20 17:06:30.066: INFO: Waiting for pod pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb to disappear
Apr 20 17:06:30.073: INFO: Pod pod-7dc14b4f-fe9a-4dee-ba8a-da473da14dbb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:06:30.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9841" for this suite.
Apr 20 17:06:36.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:06:36.212: INFO: namespace emptydir-9841 deletion completed in 6.135564607s

• [SLOW TEST:10.261 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:06:36.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 20 17:06:36.303: INFO: Waiting up to 5m0s for pod "client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a" in namespace "containers-650" to be "success or failure"
Apr 20 17:06:36.313: INFO: Pod "client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028636ms
Apr 20 17:06:38.362: INFO: Pod "client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058591241s
Apr 20 17:06:40.366: INFO: Pod "client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062734384s
STEP: Saw pod success
Apr 20 17:06:40.366: INFO: Pod "client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a" satisfied condition "success or failure"
Apr 20 17:06:40.368: INFO: Trying to get logs from node iruya-worker pod client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a container test-container:
STEP: delete the pod
Apr 20 17:06:40.397: INFO: Waiting for pod client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a to disappear
Apr 20 17:06:40.409: INFO: Pod client-containers-10c9dcba-5595-44d1-8617-802cdc5d2e9a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:06:40.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-650" for this suite.
Apr 20 17:06:46.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:06:46.595: INFO: namespace containers-650 deletion completed in 6.18316164s
• [SLOW TEST:10.383 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:06:46.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cadda8d5-350c-47fc-ad88-1361125070cb in namespace container-probe-1297
Apr 20 17:06:50.711: INFO: Started pod busybox-cadda8d5-350c-47fc-ad88-1361125070cb in namespace container-probe-1297
STEP: checking the pod's current state and verifying that restartCount is present
Apr 20 17:06:50.714: INFO: Initial restart count of pod busybox-cadda8d5-350c-47fc-ad88-1361125070cb is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:10:51.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1297" for this suite.
Apr 20 17:10:57.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:10:57.388: INFO: namespace container-probe-1297 deletion completed in 6.120234913s
• [SLOW TEST:250.792 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:10:57.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 20 17:10:57.448: INFO: Waiting up to 5m0s for pod "client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2" in namespace "containers-172" to be "success or failure"
Apr 20 17:10:57.452: INFO: Pod "client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.848884ms
Apr 20 17:10:59.456: INFO: Pod "client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007649074s
Apr 20 17:11:01.460: INFO: Pod "client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01202115s
STEP: Saw pod success
Apr 20 17:11:01.460: INFO: Pod "client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2" satisfied condition "success or failure"
Apr 20 17:11:01.463: INFO: Trying to get logs from node iruya-worker pod client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2 container test-container:
STEP: delete the pod
Apr 20 17:11:01.518: INFO: Waiting for pod client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2 to disappear
Apr 20 17:11:01.533: INFO: Pod client-containers-d536a4e6-0c20-4b0a-947f-4ce8007cc6a2 no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:11:01.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-172" for this suite.
Apr 20 17:11:07.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:11:07.658: INFO: namespace containers-172 deletion completed in 6.120426796s
• [SLOW TEST:10.269 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:11:07.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 20 17:11:07.735: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7880" to be "success or failure"
Apr 20 17:11:07.756: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.366505ms
Apr 20 17:11:09.760: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024647597s
Apr 20 17:11:11.763: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028080471s
Apr 20 17:11:13.766: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.031218085s
Apr 20 17:11:15.771: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035262004s
STEP: Saw pod success
Apr 20 17:11:15.771: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 20 17:11:15.773: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 20 17:11:15.806: INFO: Waiting for pod pod-host-path-test to disappear
Apr 20 17:11:15.851: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:11:15.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7880" for this suite.
Apr 20 17:11:21.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:11:21.952: INFO: namespace hostpath-7880 deletion completed in 6.096956404s
• [SLOW TEST:14.294 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:11:21.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:11:22.000: INFO: Creating deployment "nginx-deployment"
Apr 20 17:11:22.034: INFO: Waiting for observed generation 1
Apr 20 17:11:24.048: INFO: Waiting for all required pods to come up
Apr 20 17:11:24.052: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 20 17:11:34.067: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 20 17:11:34.073: INFO: Updating deployment "nginx-deployment" with a non-existent image
Apr 20 17:11:34.079: INFO: Updating deployment nginx-deployment
Apr 20 17:11:34.079: INFO: Waiting for observed generation 2
Apr 20 17:11:36.109: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 20 17:11:36.112: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 20 17:11:36.114: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 20 17:11:36.119: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 20 17:11:36.119: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 20 17:11:36.121: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 20 17:11:36.125: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 20 17:11:36.125: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 20 17:11:36.130: INFO: Updating deployment nginx-deployment
Apr 20 17:11:36.130: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Apr 20 17:11:36.325: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 20 17:11:36.327: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 20 17:11:37.237: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8727,SelfLink:/apis/apps/v1/namespaces/deployment-8727/deployments/nginx-deployment,UID:76f72389-4da1-4ea1-9ac9-d80e4ae32342,ResourceVersion:1304315,Generation:3,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2021-04-20 17:11:34 +0000 UTC 2021-04-20 17:11:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2021-04-20 17:11:36 +0000 UTC 2021-04-20 17:11:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 20 17:11:37.391: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8727,SelfLink:/apis/apps/v1/namespaces/deployment-8727/replicasets/nginx-deployment-55fb7cb77f,UID:5fed0091-5a17-4718-ba52-6f005c284c0c,ResourceVersion:1304367,Generation:3,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 76f72389-4da1-4ea1-9ac9-d80e4ae32342 0xc002a84ee7 0xc002a84ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 20 17:11:37.391: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 20 17:11:37.391: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8727,SelfLink:/apis/apps/v1/namespaces/deployment-8727/replicasets/nginx-deployment-7b8c6f4498,UID:b62445b1-6ad2-48b3-8b53-1194e7fbb885,ResourceVersion:1304365,Generation:3,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 76f72389-4da1-4ea1-9ac9-d80e4ae32342 0xc002a84fb7 0xc002a84fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 20 17:11:37.475: INFO: Pod "nginx-deployment-55fb7cb77f-5vf6t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5vf6t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-5vf6t,UID:74cb7936-5ced-4fec-ab05-efb667d0944f,ResourceVersion:1304375,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257ee77 0xc00257ee78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00257eef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257ef10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-20 17:11:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.475: INFO: Pod "nginx-deployment-55fb7cb77f-7tp8k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7tp8k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-7tp8k,UID:8fc16891-1f4d-4fc6-a580-886704f171f7,ResourceVersion:1304349,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257efe0 0xc00257efe1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257f060} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.475: INFO: Pod "nginx-deployment-55fb7cb77f-92mk2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-92mk2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-92mk2,UID:eedacddf-3d20-4072-82f1-388504b90afa,ResourceVersion:1304283,Generation:0,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f107 0xc00257f108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00257f180} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-20 17:11:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.475: INFO: Pod "nginx-deployment-55fb7cb77f-b4dgz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b4dgz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-b4dgz,UID:19be0968-4f04-4468-9c4c-27fad18525e5,ResourceVersion:1304351,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f270 0xc00257f271}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257f2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-fgqdv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fgqdv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-fgqdv,UID:4b6a94c1-92ef-4388-bf5a-3011cdc4c8fd,ResourceVersion:1304362,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f3a7 0xc00257f3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00257f420} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-hhhcs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hhhcs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-hhhcs,UID:e896e949-2f10-468d-b08f-dad6e692a7a4,ResourceVersion:1304292,Generation:0,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f4c7 0xc00257f4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257f550} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 17:11:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-j2lnt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j2lnt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-j2lnt,UID:b27892a7-5926-4353-a611-5c7db5835676,ResourceVersion:1304334,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f650 0xc00257f651}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00257f6d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-j9ghc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j9ghc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-j9ghc,UID:9e6cd28f-8e05-4b55-8a66-15d6cfac85c7,ResourceVersion:1304299,Generation:0,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f777 0xc00257f778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257f820} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-20 17:11:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-jckbz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jckbz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-jckbz,UID:45c7dc4e-3068-4b7a-92ae-3e3b4760fa8f,ResourceVersion:1304302,Generation:0,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257f910 0xc00257f911}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00257f990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257f9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 17:11:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.476: INFO: Pod "nginx-deployment-55fb7cb77f-n69gw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n69gw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-n69gw,UID:922e08e5-8fa1-4112-8d96-6d48ae488f98,ResourceVersion:1304279,Generation:0,CreationTimestamp:2021-04-20 17:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257fa80 0xc00257fa81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257fb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257fb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 17:11:34 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-55fb7cb77f-npq5s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-npq5s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-npq5s,UID:e69cbda8-fdaf-4195-9016-6e19337dfabf,ResourceVersion:1304356,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257fbf0 0xc00257fbf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257fc70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257fc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-55fb7cb77f-nxd8k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nxd8k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-nxd8k,UID:ddfe50a4-6a08-430b-9efa-cb48afc1c537,ResourceVersion:1304350,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257fd17 0xc00257fd18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257fd90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257fdb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-55fb7cb77f-pjtcj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pjtcj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-55fb7cb77f-pjtcj,UID:e3db00e7-0c77-469e-bf77-623dd24d6830,ResourceVersion:1304333,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5fed0091-5a17-4718-ba52-6f005c284c0c 0xc00257fe37 0xc00257fe38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00257feb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257fed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-7b8c6f4498-4lk4z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4lk4z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-4lk4z,UID:8592439f-03ce-4a36-9f96-9d6111f0000e,ResourceVersion:1304337,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc00257ff57 0xc00257ff58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257ffd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257fff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-7b8c6f4498-799mg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-799mg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-799mg,UID:9dbd4b05-b297-4e63-b850-0962ea36f552,ResourceVersion:1304366,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0a0f7 
0xc002f0a0f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0a220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0a240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 17:11:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-7b8c6f4498-7xzk7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7xzk7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-7xzk7,UID:a435b578-93c4-4361-8a84-0809a88fabbd,ResourceVersion:1304238,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0a387 0xc002f0a388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0a400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0a420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.6,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 17:11:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://87f4422b6c29c5bb83d84e1412c87b0a5f50bf09d2ecac3a5792a8c22dbcbf8b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.477: INFO: Pod "nginx-deployment-7b8c6f4498-8kqkm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8kqkm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-8kqkm,UID:0a9438ce-6200-4083-860c-6e429f86caba,ResourceVersion:1304329,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0a5c7 0xc002f0a5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0a740} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0a760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.478: INFO: Pod "nginx-deployment-7b8c6f4498-8mdsd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8mdsd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-8mdsd,UID:8a993aa3-1aa3-4fdf-a2c2-74a6b111216f,ResourceVersion:1304226,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0a817 0xc002f0a818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0a890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0a8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.68,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2021-04-20 17:11:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fd33a9e7a77cf518f62ed6fb6dad3be61b5e2c1f0485149d4a73540735f68480}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.478: INFO: Pod "nginx-deployment-7b8c6f4498-cz6vc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cz6vc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-cz6vc,UID:619ddf2f-7682-4682-8d24-0434a4b0a6fc,ResourceVersion:1304332,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0ab27 0xc002f0ab28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0aba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0abc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.478: INFO: Pod "nginx-deployment-7b8c6f4498-dgpb2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dgpb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-dgpb2,UID:bb0304cb-3a7c-44fc-b854-9d7d96d4e37b,ResourceVersion:1304359,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0ace7 0xc002f0ace8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0ade0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ae30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.478: INFO: Pod "nginx-deployment-7b8c6f4498-fzvqg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fzvqg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-fzvqg,UID:58d10c5f-9658-4869-91a0-046b66ec42de,ResourceVersion:1304354,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0aee7 0xc002f0aee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.478: INFO: Pod "nginx-deployment-7b8c6f4498-gmv8q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gmv8q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-gmv8q,UID:98ffc701-a4fd-402b-9e33-ab815ebd513c,ResourceVersion:1304358,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b167 0xc002f0b168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-h4dhm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h4dhm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-h4dhm,UID:cfe323e9-4a21-4bda-8541-e948599b25f8,ResourceVersion:1304222,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b2b7 0xc002f0b2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.4,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 17:11:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://80dff49713b8bdcfa978689061e2030297112c6d9bf9354916b6e9f28c14ff36}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-hv6qj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hv6qj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-hv6qj,UID:ebadb011-80ff-4fc3-bcc6-8e8a3920c462,ResourceVersion:1304345,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b607 0xc002f0b608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b680} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-kbpf4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kbpf4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-kbpf4,UID:554ef6c4-0a7b-4547-a12d-f2ecc72db331,ResourceVersion:1304235,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b727 0xc002f0b728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.5,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2021-04-20 17:11:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c71e2f5513ec22fc82ee65388ba827a4f22b9956c62fbb5310a80441ca747db1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-kgxbh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kgxbh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-kgxbh,UID:6dc2a7c8-8010-47bf-ac40-f0777b2145bd,ResourceVersion:1304357,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b8b7 0xc002f0b8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0b930} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0b950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-kp2wf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kp2wf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-kp2wf,UID:9b1f38e9-226c-4289-ba97-03841d87496d,ResourceVersion:1304346,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0b9d7 0xc002f0b9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0ba50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ba70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-ks6nh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ks6nh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-ks6nh,UID:85333adb-6ec5-4c5a-9918-0b60350f7430,ResourceVersion:1304197,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0baf7 0xc002f0baf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0bb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0bba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.3,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 17:11:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0ecb115fbb3aa9bb48cae062db5d47cb477f0da8eec3388130d512fb462e3f1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.479: INFO: Pod "nginx-deployment-7b8c6f4498-m65nq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m65nq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-m65nq,UID:2a48b895-d453-477d-967c-47ae6785bde7,ResourceVersion:1304379,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0bc77 0xc002f0bc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0bcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0bd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-20 17:11:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.480: INFO: Pod "nginx-deployment-7b8c6f4498-rgdlh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rgdlh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-rgdlh,UID:f42b1f9e-7ae7-45f0-b1cf-84354c97edbc,ResourceVersion:1304204,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0bdd7 0xc002f0bdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0be50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0be70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.2,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 17:11:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b0b5d1d22ed8cdaeb705026652e0e7312fd42d09f09693067f4aae5cdf72916a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.480: INFO: Pod "nginx-deployment-7b8c6f4498-rhz9z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rhz9z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-rhz9z,UID:462db129-5c90-4d7d-8c12-242d0f8448c6,ResourceVersion:1304352,Generation:0,CreationTimestamp:2021-04-20 17:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002f0bf47 0xc002f0bf48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0bfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0bfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.480: INFO: Pod "nginx-deployment-7b8c6f4498-tgz8b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tgz8b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-tgz8b,UID:29c6fc35-f401-43f2-beca-5a69d819fb1e,ResourceVersion:1304242,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002cc0087 0xc002cc0088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc0190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc01b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.71,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2021-04-20 17:11:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9bae8bbc54c66766e61d07ddefbc79f8893714fd5c0d0e637d4e8811621857d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 20 17:11:37.480: INFO: Pod "nginx-deployment-7b8c6f4498-v4f7l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v4f7l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8727,SelfLink:/api/v1/namespaces/deployment-8727/pods/nginx-deployment-7b8c6f4498-v4f7l,UID:138c46dd-8e72-416f-9b94-43ff46b795b2,ResourceVersion:1304198,Generation:0,CreationTimestamp:2021-04-20 17:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b62445b1-6ad2-48b3-8b53-1194e7fbb885 0xc002cc02c7 0xc002cc02c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6pt8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6pt8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6pt8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cc03e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cc0400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.67,StartTime:2021-04-20 17:11:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-20 17:11:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://07464c7144b5697f2551cf5c64b76bddd1ae3c9d2dce48bb7773f39f5b403675}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:11:37.480: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "deployment-8727" for this suite. Apr 20 17:11:55.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:11:56.080: INFO: namespace deployment-8727 deletion completed in 18.544247758s • [SLOW TEST:34.127 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:11:56.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5367/configmap-test-33861160-6079-4f5b-9f53-5da728ed5ded STEP: Creating a pod to test consume configMaps Apr 20 17:11:56.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa" in namespace "configmap-5367" to be "success or failure" Apr 20 17:11:56.389: INFO: Pod 
"pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa": Phase="Pending", Reason="", readiness=false. Elapsed: 19.714313ms Apr 20 17:11:58.482: INFO: Pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112484735s Apr 20 17:12:00.485: INFO: Pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa": Phase="Running", Reason="", readiness=true. Elapsed: 4.115998558s Apr 20 17:12:02.520: INFO: Pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa": Phase="Running", Reason="", readiness=true. Elapsed: 6.151082334s Apr 20 17:12:04.629: INFO: Pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.259704024s STEP: Saw pod success Apr 20 17:12:04.629: INFO: Pod "pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa" satisfied condition "success or failure" Apr 20 17:12:04.632: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa container env-test: STEP: delete the pod Apr 20 17:12:04.753: INFO: Waiting for pod pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa to disappear Apr 20 17:12:04.765: INFO: Pod pod-configmaps-d584db94-a6b4-439f-945c-d4083f1b60fa no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:12:04.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5367" for this suite. 
Apr 20 17:12:10.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:12:10.874: INFO: namespace configmap-5367 deletion completed in 6.105893016s • [SLOW TEST:14.794 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:12:10.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 20 17:12:10.918: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196" in namespace "projected-7113" to be "success or failure" Apr 20 17:12:10.933: INFO: Pod "downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196": Phase="Pending", Reason="", readiness=false. Elapsed: 15.692413ms Apr 20 17:12:12.938: INFO: Pod "downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020508982s Apr 20 17:12:14.943: INFO: Pod "downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024804418s Apr 20 17:12:16.947: INFO: Pod "downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029183907s STEP: Saw pod success Apr 20 17:12:16.947: INFO: Pod "downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196" satisfied condition "success or failure" Apr 20 17:12:16.951: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196 container client-container: STEP: delete the pod Apr 20 17:12:16.971: INFO: Waiting for pod downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196 to disappear Apr 20 17:12:16.991: INFO: Pod downwardapi-volume-c4bdfb7c-3aff-44d5-a4e3-1df5ed3bd196 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:12:16.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7113" for this suite. 
Apr 20 17:12:23.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:12:23.100: INFO: namespace projected-7113 deletion completed in 6.104416971s • [SLOW TEST:12.225 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:12:23.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9058 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 20 17:12:23.155: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 20 17:12:45.289: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.86:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9058 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 17:12:45.289: INFO: >>> kubeConfig: /root/.kube/config Apr 20 17:12:45.428: INFO: Found all expected endpoints: [netserver-0] Apr 20 17:12:45.430: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.20:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9058 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 20 17:12:45.430: INFO: >>> kubeConfig: /root/.kube/config Apr 20 17:12:45.559: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:12:45.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9058" for this suite. 
Apr 20 17:13:09.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:13:09.700: INFO: namespace pod-network-test-9058 deletion completed in 24.136732563s • [SLOW TEST:46.600 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:13:09.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-d3352c9f-b995-4e33-a8ae-72c1f79be85f STEP: Creating secret with name s-test-opt-upd-41066af7-1f3a-47e3-a3ea-405dba5520cf STEP: Creating the pod 
STEP: Deleting secret s-test-opt-del-d3352c9f-b995-4e33-a8ae-72c1f79be85f STEP: Updating secret s-test-opt-upd-41066af7-1f3a-47e3-a3ea-405dba5520cf STEP: Creating secret with name s-test-opt-create-bf8b005c-0eb0-4179-9d47-22b39c422128 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:14:36.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-930" for this suite. Apr 20 17:14:58.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:14:58.384: INFO: namespace secrets-930 deletion completed in 22.120004579s • [SLOW TEST:108.684 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:14:58.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-22nn STEP: Creating a pod to test atomic-volume-subpath Apr 20 17:14:58.490: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-22nn" in namespace "subpath-6404" to be "success or failure" Apr 20 17:14:58.506: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Pending", Reason="", readiness=false. Elapsed: 15.958834ms Apr 20 17:15:00.510: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020232644s Apr 20 17:15:02.515: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 4.024369941s Apr 20 17:15:04.519: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 6.028549699s Apr 20 17:15:06.523: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 8.032517567s Apr 20 17:15:08.527: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 10.036403884s Apr 20 17:15:10.531: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 12.040594081s Apr 20 17:15:12.535: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 14.044406022s Apr 20 17:15:14.538: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 16.04809723s Apr 20 17:15:16.543: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.052463025s Apr 20 17:15:18.547: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 20.056276782s Apr 20 17:15:20.551: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Running", Reason="", readiness=true. Elapsed: 22.060563185s Apr 20 17:15:22.555: INFO: Pod "pod-subpath-test-configmap-22nn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064764907s STEP: Saw pod success Apr 20 17:15:22.555: INFO: Pod "pod-subpath-test-configmap-22nn" satisfied condition "success or failure" Apr 20 17:15:22.558: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-22nn container test-container-subpath-configmap-22nn: STEP: delete the pod Apr 20 17:15:22.593: INFO: Waiting for pod pod-subpath-test-configmap-22nn to disappear Apr 20 17:15:22.620: INFO: Pod pod-subpath-test-configmap-22nn no longer exists STEP: Deleting pod pod-subpath-test-configmap-22nn Apr 20 17:15:22.620: INFO: Deleting pod "pod-subpath-test-configmap-22nn" in namespace "subpath-6404" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:15:22.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6404" for this suite. 
Apr 20 17:15:28.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:15:28.757: INFO: namespace subpath-6404 deletion completed in 6.130926012s • [SLOW TEST:30.372 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:15:28.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 20 17:15:28.821: INFO: PodSpec: initContainers in spec.initContainers Apr 20 17:16:14.043: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-70f88b31-8a11-4c80-a7d7-e02ece11050f", GenerateName:"", Namespace:"init-container-9180", SelfLink:"/api/v1/namespaces/init-container-9180/pods/pod-init-70f88b31-8a11-4c80-a7d7-e02ece11050f", UID:"700d2145-5e79-4d24-9bba-8e016bdd2171", ResourceVersion:"1305367", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63754535728, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"821651130"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7ktcn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00290f140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7ktcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7ktcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7ktcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00362b268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", 
DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023458c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00362b420)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00362b440)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00362b448), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00362b44c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754535728, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754535728, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754535728, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers 
with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754535728, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.23", StartTime:(*v1.Time)(0xc003008ee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003009140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001cdf340)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0a6c9dab95daeccbd47f7e5af8c935a6dfe74b422d458769486d06e47cd666af"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003009160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003009120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:16:14.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9180" for this suite. Apr 20 17:16:36.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:16:36.169: INFO: namespace init-container-9180 deletion completed in 22.104733002s • [SLOW TEST:67.411 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:16:36.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 20 17:16:36.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2" in namespace "projected-463" to be "success or failure" Apr 20 17:16:36.262: INFO: Pod "downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695328ms Apr 20 17:16:38.325: INFO: Pod "downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0694662s Apr 20 17:16:40.332: INFO: Pod "downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076467863s STEP: Saw pod success Apr 20 17:16:40.332: INFO: Pod "downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2" satisfied condition "success or failure" Apr 20 17:16:40.335: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2 container client-container: STEP: delete the pod Apr 20 17:16:40.398: INFO: Waiting for pod downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2 to disappear Apr 20 17:16:40.406: INFO: Pod downwardapi-volume-e861fceb-7907-4110-ad47-4032161417b2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:16:40.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-463" for this suite. 
Apr 20 17:16:46.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:16:46.545: INFO: namespace projected-463 deletion completed in 6.135179541s • [SLOW TEST:10.376 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:16:46.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3b71ffd9-cb96-47e9-9546-9fdb71516b2f STEP: Creating a pod to test consume secrets Apr 20 17:16:46.618: INFO: Waiting up to 5m0s for pod "pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7" in 
namespace "secrets-4748" to be "success or failure" Apr 20 17:16:46.623: INFO: Pod "pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468723ms Apr 20 17:16:48.627: INFO: Pod "pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008823519s Apr 20 17:16:50.631: INFO: Pod "pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013135045s STEP: Saw pod success Apr 20 17:16:50.631: INFO: Pod "pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7" satisfied condition "success or failure" Apr 20 17:16:50.634: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7 container secret-volume-test: STEP: delete the pod Apr 20 17:16:50.685: INFO: Waiting for pod pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7 to disappear Apr 20 17:16:50.695: INFO: Pod pod-secrets-e9ed6edd-e98c-48c6-a0ab-80e701d479e7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:16:50.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4748" for this suite. 
Apr 20 17:16:56.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:16:56.798: INFO: namespace secrets-4748 deletion completed in 6.099720927s • [SLOW TEST:10.252 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:16:56.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 20 17:17:04.909: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:04.921: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:06.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:06.925: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:08.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:08.925: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:10.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:10.926: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:12.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:12.925: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:14.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:14.925: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:16.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:16.926: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:18.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:18.926: INFO: Pod pod-with-prestop-http-hook still exists Apr 20 17:17:20.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 20 17:17:20.930: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:17:20.937: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "container-lifecycle-hook-473" for this suite. Apr 20 17:17:42.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 20 17:17:43.094: INFO: namespace container-lifecycle-hook-473 deletion completed in 22.154315004s • [SLOW TEST:46.295 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 20 17:17:43.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7658 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7658 STEP: Deleting pre-stop pod Apr 20 17:17:56.229: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 20 17:17:56.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7658" for this suite. 
Apr 20 17:18:34.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:18:34.372: INFO: namespace prestop-7658 deletion completed in 38.093182706s

• [SLOW TEST:51.278 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:18:34.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f3f93d44-30d8-4648-91f4-88958b538e6e
STEP: Creating a pod to test consume configMaps
Apr 20 17:18:34.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb" in namespace "configmap-9013" to be "success or failure"
Apr 20 17:18:34.482: INFO: Pod "pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267317ms
Apr 20 17:18:36.590: INFO: Pod "pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117322186s
Apr 20 17:18:38.594: INFO: Pod "pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121164723s
STEP: Saw pod success
Apr 20 17:18:38.594: INFO: Pod "pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb" satisfied condition "success or failure"
Apr 20 17:18:38.596: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb container configmap-volume-test: 
STEP: delete the pod
Apr 20 17:18:38.694: INFO: Waiting for pod pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb to disappear
Apr 20 17:18:38.703: INFO: Pod pod-configmaps-72bdd6fc-4497-49d7-958c-421d4f1793eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:18:38.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9013" for this suite.
Apr 20 17:18:44.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:18:44.812: INFO: namespace configmap-9013 deletion completed in 6.105388929s

• [SLOW TEST:10.439 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:18:44.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:18:44.900: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

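The listing above is what the node `logs` proxy subresource serves: a directory index of the node's log directory (here `alternatives.log` and `containers/`), fetched repeatedly by the test. As a rough sketch, the endpoint path the test hits can be built like this (the helper name is hypothetical; only the URL shape comes from the log):

```python
def node_logs_proxy_path(node, path=""):
    # Shape of the API endpoint exercised by the proxy test: the node
    # "logs" proxy subresource, which returns the node's log directory
    # listing (the alternatives.log / containers/ entries above).
    return f"/api/v1/nodes/{node}/proxy/logs/{path}"
```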
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 20 17:18:55.679: INFO: Successfully updated pod "labelsupdate7a9045bb-2d15-4ee1-b786-3295d5cce9e9"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:18:59.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3292" for this suite.
Apr 20 17:19:21.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:19:21.812: INFO: namespace projected-3292 deletion completed in 22.105971838s

• [SLOW TEST:30.723 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
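The Projected downwardAPI test above updates the pod's labels and waits for the projected file to refresh. As a minimal sketch (the function is hypothetical, but the `key="value"`-per-line, key-sorted layout is what the kubelet writes into downward API label files):

```python
def format_labels(labels):
    # Sketch of the downward-API labels file layout: one key="value"
    # entry per line, sorted by key; the kubelet rewrites this file
    # after a label update like the one in the test above.
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
```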
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:19:21.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 20 17:19:21.881: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:19:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3825" for this suite.
Apr 20 17:19:34.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:19:34.131: INFO: namespace init-container-3825 deletion completed in 6.108389329s

• [SLOW TEST:12.319 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
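The InitContainer test above relies on the rule that, with `restartPolicy: Never`, a failed init container fails the whole pod and the app containers are never started. A simplified sketch of that decision (hypothetical helper; the real logic lives in the kubelet's pod status computation):

```python
def pod_phase(init_results, restart_policy):
    # init_results: ordered success/failure of each init container.
    # With restartPolicy=Never a failed init container is terminal for
    # the pod; with other policies the init container is retried, so
    # the pod stays Pending rather than failing outright.
    for ok in init_results:
        if not ok:
            return "Failed" if restart_policy == "Never" else "Pending"
    return "Running"  # all init containers succeeded; app containers start
```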
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:19:34.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 20 17:19:34.213: INFO: Waiting up to 5m0s for pod "var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66" in namespace "var-expansion-239" to be "success or failure"
Apr 20 17:19:34.217: INFO: Pod "var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573069ms
Apr 20 17:19:36.221: INFO: Pod "var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007596812s
Apr 20 17:19:38.225: INFO: Pod "var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012030769s
STEP: Saw pod success
Apr 20 17:19:38.225: INFO: Pod "var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66" satisfied condition "success or failure"
Apr 20 17:19:38.228: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66 container dapi-container: 
STEP: delete the pod
Apr 20 17:19:38.265: INFO: Waiting for pod var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66 to disappear
Apr 20 17:19:38.291: INFO: Pod var-expansion-f6daad89-60f8-4765-87b8-9707eb12dd66 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:19:38.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-239" for this suite.
Apr 20 17:19:44.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:19:44.435: INFO: namespace var-expansion-239 deletion completed in 6.141628911s

• [SLOW TEST:10.303 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
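The Variable Expansion test above composes env vars through `$(VAR)` references. A simplified sketch of the expansion rule (the function is illustrative, not the k8s implementation): known references are substituted, `$$` escapes a literal `$`, and unknown references pass through unchanged.

```python
import re

def expand(value, env):
    # Substitute $(NAME) with a previously defined variable's value;
    # $$(NAME) escapes to a literal $(NAME); unresolved references are
    # left as-is.
    value = value.replace("$$", "\x00")  # protect escaped dollars
    value = re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        value,
    )
    return value.replace("\x00", "$")
```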
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:19:44.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-94247c2f-ccbf-41b7-9764-e57632737f40
STEP: Creating a pod to test consume secrets
Apr 20 17:19:44.500: INFO: Waiting up to 5m0s for pod "pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6" in namespace "secrets-869" to be "success or failure"
Apr 20 17:19:44.504: INFO: Pod "pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192969ms
Apr 20 17:19:46.519: INFO: Pod "pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018706429s
Apr 20 17:19:48.522: INFO: Pod "pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021988018s
STEP: Saw pod success
Apr 20 17:19:48.522: INFO: Pod "pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6" satisfied condition "success or failure"
Apr 20 17:19:48.524: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6 container secret-volume-test: 
STEP: delete the pod
Apr 20 17:19:48.538: INFO: Waiting for pod pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6 to disappear
Apr 20 17:19:48.549: INFO: Pod pod-secrets-b6ca5c6f-b9a8-4e51-866c-85973e2e2fe6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:19:48.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-869" for this suite.
Apr 20 17:19:54.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:19:54.663: INFO: namespace secrets-869 deletion completed in 6.110951966s

• [SLOW TEST:10.227 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
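The `Waiting up to 5m0s for pod ... to be "success or failure"` lines that recur throughout this log are a simple poll-until-terminal-phase loop. A minimal sketch of that pattern, with the clock and sleep injectable for testing (names are hypothetical, not the framework's API):

```python
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    # Poll the pod phase every couple of seconds until it reaches a
    # terminal state (Succeeded/Failed) or the 5m budget runs out.
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")
```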
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:19:54.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 20 17:19:54.743: INFO: namespace kubectl-96
Apr 20 17:19:54.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-96'
Apr 20 17:19:57.567: INFO: stderr: ""
Apr 20 17:19:57.567: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 20 17:19:58.572: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:19:58.572: INFO: Found 0 / 1
Apr 20 17:19:59.572: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:19:59.572: INFO: Found 0 / 1
Apr 20 17:20:00.571: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:20:00.571: INFO: Found 0 / 1
Apr 20 17:20:01.585: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:20:01.585: INFO: Found 1 / 1
Apr 20 17:20:01.585: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 20 17:20:01.588: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:20:01.588: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 20 17:20:01.588: INFO: wait on redis-master startup in kubectl-96 
Apr 20 17:20:01.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xj7pl redis-master --namespace=kubectl-96'
Apr 20 17:20:01.702: INFO: stderr: ""
Apr 20 17:20:01.702: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Apr 17:20:00.595 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Apr 17:20:00.595 # Server started, Redis version 3.2.12\n1:M 20 Apr 17:20:00.595 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Apr 17:20:00.595 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Apr 20 17:20:01.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-96'
Apr 20 17:20:01.851: INFO: stderr: ""
Apr 20 17:20:01.851: INFO: stdout: "service/rm2 exposed\n"
Apr 20 17:20:01.861: INFO: Service rm2 in namespace kubectl-96 found.
STEP: exposing service
Apr 20 17:20:03.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-96'
Apr 20 17:20:03.995: INFO: stderr: ""
Apr 20 17:20:03.996: INFO: stdout: "service/rm3 exposed\n"
Apr 20 17:20:04.005: INFO: Service rm3 in namespace kubectl-96 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:20:06.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-96" for this suite.
Apr 20 17:20:28.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:20:28.117: INFO: namespace kubectl-96 deletion completed in 22.102312269s

• [SLOW TEST:33.454 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
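The `kubectl expose` calls above (rm2, rm3) each create a ClusterIP Service mapping `--port` to `--target-port` with the source object's selector. A rough sketch of the resulting object (hypothetical helper; field names follow the v1 Service schema):

```python
def expose_service(name, port, target_port, selector):
    # Approximate shape of the Service `kubectl expose` creates:
    # a ClusterIP service forwarding --port to --target-port on pods
    # matched by the exposed object's selector.
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "type": "ClusterIP",
            "selector": selector,
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }
```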
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:20:28.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 17:20:34.293: INFO: DNS probes using dns-4744/dns-test-cf77ec31-a451-45fd-8915-92d291823060 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:20:34.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4744" for this suite.
Apr 20 17:20:40.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:20:40.471: INFO: namespace dns-4744 deletion completed in 6.13476945s

• [SLOW TEST:12.354 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
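The awk one-liner in the DNS probe above builds the pod A record by replacing the dots in the pod IP with dashes. A sketch of the same naming rule (hypothetical function; the record format itself is standard cluster DNS):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    # <ip-with-dashes>.<namespace>.pod.<cluster-domain>, the pod
    # A record the dig probes above resolve over UDP and TCP.
    return f'{pod_ip.replace(".", "-")}.{namespace}.pod.{cluster_domain}'
```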
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:20:40.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 20 17:20:40.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 20 17:20:40.554: INFO: Waiting for terminating namespaces to be deleted...
Apr 20 17:20:40.556: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Apr 20 17:20:40.568: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.568: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 20 17:20:40.568: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.568: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 20 17:20:40.568: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.568: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 20 17:20:40.568: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 20 17:20:40.573: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.573: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 20 17:20:40.573: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.573: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 20 17:20:40.573: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.573: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 20 17:20:40.573: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:20:40.573: INFO: 	Container chaos-mesh ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 20 17:20:40.632: INFO: Pod chaos-controller-manager-6c68f56f79-plhrb requesting resource cpu=25m on Node iruya-worker2
Apr 20 17:20:40.632: INFO: Pod chaos-daemon-5nrq6 requesting resource cpu=0m on Node iruya-worker2
Apr 20 17:20:40.632: INFO: Pod chaos-daemon-kbww4 requesting resource cpu=0m on Node iruya-worker
Apr 20 17:20:40.632: INFO: Pod kindnet-7fbjm requesting resource cpu=100m on Node iruya-worker
Apr 20 17:20:40.632: INFO: Pod kindnet-nxsfn requesting resource cpu=100m on Node iruya-worker2
Apr 20 17:20:40.632: INFO: Pod kube-proxy-pz4cr requesting resource cpu=0m on Node iruya-worker2
Apr 20 17:20:40.632: INFO: Pod kube-proxy-qp6db requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-516eefa5-567e-4d56-925d-ea7192c19468.1677a0a971cabd27], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3448/filler-pod-516eefa5-567e-4d56-925d-ea7192c19468 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-516eefa5-567e-4d56-925d-ea7192c19468.1677a0a9c4cccf57], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-516eefa5-567e-4d56-925d-ea7192c19468.1677a0aa2b43667e], Reason = [Created], Message = [Created container filler-pod-516eefa5-567e-4d56-925d-ea7192c19468]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-516eefa5-567e-4d56-925d-ea7192c19468.1677a0aa4f75a7e2], Reason = [Started], Message = [Started container filler-pod-516eefa5-567e-4d56-925d-ea7192c19468]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916.1677a0a971c907b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3448/filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916.1677a0aa02abd7bb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916.1677a0aa59684975], Reason = [Created], Message = [Created container filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916.1677a0aa6aeeabaa], Reason = [Started], Message = [Started container filler-pod-8d13d41e-33d2-421e-bfe4-f8b67b470916]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1677a0aad87d2a5f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:20:47.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3448" for this suite.
Apr 20 17:20:53.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:20:53.907: INFO: namespace sched-pred-3448 deletion completed in 6.104853816s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.435 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
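The SchedulerPredicates test above fills most of each node's CPU with filler pods, then shows that an `additional-pod` is rejected with `Insufficient cpu`. The fit check it triggers reduces to a sum over requests; a simplified sketch (hypothetical helper, millicore units as in the `cpu=25m` log lines):

```python
def fits_cpu(node_allocatable_m, existing_requests_m, pod_request_m):
    # A pod fits only if its CPU request plus the requests already on
    # the node stay within the node's allocatable CPU.
    return sum(existing_requests_m) + pod_request_m <= node_allocatable_m
```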
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:20:53.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 17:20:54.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f" in namespace "downward-api-776" to be "success or failure"
Apr 20 17:20:54.304: INFO: Pod "downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.122439ms
Apr 20 17:20:56.383: INFO: Pod "downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116416841s
Apr 20 17:20:58.387: INFO: Pod "downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120312703s
STEP: Saw pod success
Apr 20 17:20:58.387: INFO: Pod "downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f" satisfied condition "success or failure"
Apr 20 17:20:58.390: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f container client-container: 
STEP: delete the pod
Apr 20 17:20:58.409: INFO: Waiting for pod downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f to disappear
Apr 20 17:20:58.413: INFO: Pod downwardapi-volume-0764b351-62f2-4d3d-849d-c48bf2caf58f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:20:58.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-776" for this suite.
Apr 20 17:21:04.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:21:04.714: INFO: namespace downward-api-776 deletion completed in 6.297781577s

• [SLOW TEST:10.807 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:21:04.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 20 17:21:04.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6362'
Apr 20 17:21:05.245: INFO: stderr: ""
Apr 20 17:21:05.245: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 20 17:21:06.250: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:21:06.250: INFO: Found 0 / 1
Apr 20 17:21:07.249: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:21:07.249: INFO: Found 0 / 1
Apr 20 17:21:08.250: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:21:08.250: INFO: Found 1 / 1
Apr 20 17:21:08.250: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Apr 20 17:21:08.254: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:21:08.254: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 20 17:21:08.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-w2g79 --namespace=kubectl-6362 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 20 17:21:08.370: INFO: stderr: ""
Apr 20 17:21:08.370: INFO: stdout: "pod/redis-master-w2g79 patched\n"
STEP: checking annotations
Apr 20 17:21:08.372: INFO: Selector matched 1 pods for map[app:redis]
Apr 20 17:21:08.372: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:21:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6362" for this suite.
Apr 20 17:21:30.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:21:30.472: INFO: namespace kubectl-6362 deletion completed in 22.097380251s

• [SLOW TEST:25.758 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:21:30.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 17:21:30.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5" in namespace "projected-671" to be "success or failure"
Apr 20 17:21:30.554: INFO: Pod "downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.481999ms
Apr 20 17:21:32.581: INFO: Pod "downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030110056s
Apr 20 17:21:34.599: INFO: Pod "downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048308648s
STEP: Saw pod success
Apr 20 17:21:34.599: INFO: Pod "downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5" satisfied condition "success or failure"
Apr 20 17:21:34.602: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5 container client-container: 
STEP: delete the pod
Apr 20 17:21:34.640: INFO: Waiting for pod downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5 to disappear
Apr 20 17:21:34.650: INFO: Pod downwardapi-volume-2cc4fc4c-588e-496c-9cc1-31b2e111a6a5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:21:34.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-671" for this suite.
Apr 20 17:21:40.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:21:40.754: INFO: namespace projected-671 deletion completed in 6.100585207s

• [SLOW TEST:10.282 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:21:40.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 20 17:21:45.901: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:21:46.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9589" for this suite.
Apr 20 17:22:08.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:22:09.104: INFO: namespace replicaset-9589 deletion completed in 22.143741167s

• [SLOW TEST:28.350 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:22:09.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 20 17:22:09.196: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 20 17:22:14.201: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:22:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3251" for this suite.
Apr 20 17:22:21.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:22:21.385: INFO: namespace replication-controller-3251 deletion completed in 6.163258857s

• [SLOW TEST:12.280 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:22:21.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 20 17:22:22.587: INFO: Pod name wrapped-volume-race-dfcd8985-b40a-465a-9e29-afa539645b7d: Found 0 pods out of 5
Apr 20 17:22:27.594: INFO: Pod name wrapped-volume-race-dfcd8985-b40a-465a-9e29-afa539645b7d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-dfcd8985-b40a-465a-9e29-afa539645b7d in namespace emptydir-wrapper-7848, will wait for the garbage collector to delete the pods
Apr 20 17:22:41.673: INFO: Deleting ReplicationController wrapped-volume-race-dfcd8985-b40a-465a-9e29-afa539645b7d took: 7.155383ms
Apr 20 17:22:42.074: INFO: Terminating ReplicationController wrapped-volume-race-dfcd8985-b40a-465a-9e29-afa539645b7d pods took: 400.280215ms
STEP: Creating RC which spawns configmap-volume pods
Apr 20 17:23:29.417: INFO: Pod name wrapped-volume-race-45d32f48-7dc9-4a4e-bc3c-13bcea04ccc7: Found 0 pods out of 5
Apr 20 17:23:34.538: INFO: Pod name wrapped-volume-race-45d32f48-7dc9-4a4e-bc3c-13bcea04ccc7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-45d32f48-7dc9-4a4e-bc3c-13bcea04ccc7 in namespace emptydir-wrapper-7848, will wait for the garbage collector to delete the pods
Apr 20 17:23:48.647: INFO: Deleting ReplicationController wrapped-volume-race-45d32f48-7dc9-4a4e-bc3c-13bcea04ccc7 took: 7.175762ms
Apr 20 17:23:48.947: INFO: Terminating ReplicationController wrapped-volume-race-45d32f48-7dc9-4a4e-bc3c-13bcea04ccc7 pods took: 300.251275ms
STEP: Creating RC which spawns configmap-volume pods
Apr 20 17:24:30.278: INFO: Pod name wrapped-volume-race-8644b7a0-943a-430e-b641-113c5116c5b9: Found 0 pods out of 5
Apr 20 17:24:35.286: INFO: Pod name wrapped-volume-race-8644b7a0-943a-430e-b641-113c5116c5b9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8644b7a0-943a-430e-b641-113c5116c5b9 in namespace emptydir-wrapper-7848, will wait for the garbage collector to delete the pods
Apr 20 17:24:51.367: INFO: Deleting ReplicationController wrapped-volume-race-8644b7a0-943a-430e-b641-113c5116c5b9 took: 6.245389ms
Apr 20 17:24:51.767: INFO: Terminating ReplicationController wrapped-volume-race-8644b7a0-943a-430e-b641-113c5116c5b9 pods took: 400.284901ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:25:40.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7848" for this suite.
Apr 20 17:25:48.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:25:48.499: INFO: namespace emptydir-wrapper-7848 deletion completed in 8.12313801s

• [SLOW TEST:207.113 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:25:48.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 20 17:25:48.591: INFO: Waiting up to 5m0s for pod "pod-85bcf106-9838-4ae9-9261-19b6f92df3d1" in namespace "emptydir-9659" to be "success or failure"
Apr 20 17:25:48.594: INFO: Pod "pod-85bcf106-9838-4ae9-9261-19b6f92df3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839648ms
Apr 20 17:25:50.598: INFO: Pod "pod-85bcf106-9838-4ae9-9261-19b6f92df3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006463033s
Apr 20 17:25:52.602: INFO: Pod "pod-85bcf106-9838-4ae9-9261-19b6f92df3d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010516867s
STEP: Saw pod success
Apr 20 17:25:52.602: INFO: Pod "pod-85bcf106-9838-4ae9-9261-19b6f92df3d1" satisfied condition "success or failure"
Apr 20 17:25:52.604: INFO: Trying to get logs from node iruya-worker pod pod-85bcf106-9838-4ae9-9261-19b6f92df3d1 container test-container: 
STEP: delete the pod
Apr 20 17:25:52.644: INFO: Waiting for pod pod-85bcf106-9838-4ae9-9261-19b6f92df3d1 to disappear
Apr 20 17:25:52.651: INFO: Pod pod-85bcf106-9838-4ae9-9261-19b6f92df3d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:25:52.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9659" for this suite.
Apr 20 17:25:58.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:25:58.778: INFO: namespace emptydir-9659 deletion completed in 6.124068339s

• [SLOW TEST:10.279 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:25:58.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 17:25:58.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b" in namespace "projected-1774" to be "success or failure"
Apr 20 17:25:58.948: INFO: Pod "downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.029007ms
Apr 20 17:26:00.956: INFO: Pod "downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063969059s
Apr 20 17:26:02.960: INFO: Pod "downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068055551s
STEP: Saw pod success
Apr 20 17:26:02.960: INFO: Pod "downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b" satisfied condition "success or failure"
Apr 20 17:26:02.963: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b container client-container: 
STEP: delete the pod
Apr 20 17:26:02.995: INFO: Waiting for pod downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b to disappear
Apr 20 17:26:03.006: INFO: Pod downwardapi-volume-b6e7b4af-832a-4c96-8dc0-57165ab5077b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:26:03.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1774" for this suite.
Apr 20 17:26:09.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:26:09.155: INFO: namespace projected-1774 deletion completed in 6.145331651s

• [SLOW TEST:10.376 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:26:09.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 20 17:26:09.248: INFO: Waiting up to 5m0s for pod "pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0" in namespace "emptydir-6866" to be "success or failure"
Apr 20 17:26:09.252: INFO: Pod "pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.291408ms
Apr 20 17:26:11.273: INFO: Pod "pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024711392s
Apr 20 17:26:13.277: INFO: Pod "pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028566738s
STEP: Saw pod success
Apr 20 17:26:13.277: INFO: Pod "pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0" satisfied condition "success or failure"
Apr 20 17:26:13.280: INFO: Trying to get logs from node iruya-worker2 pod pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0 container test-container: 
STEP: delete the pod
Apr 20 17:26:13.325: INFO: Waiting for pod pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0 to disappear
Apr 20 17:26:13.342: INFO: Pod pod-3c0ace81-8166-4a3f-b5e2-87d21502f7b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:26:13.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6866" for this suite.
Apr 20 17:26:19.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:26:19.452: INFO: namespace emptydir-6866 deletion completed in 6.106975034s

• [SLOW TEST:10.297 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:26:19.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8137
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 20 17:26:19.596: INFO: Found 0 stateful pods, waiting for 3
Apr 20 17:26:29.615: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:26:29.615: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:26:29.615: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 20 17:26:39.599: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:26:39.599: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:26:39.599: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:26:39.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:26:39.867: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:26:39.867: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:26:39.867: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 20 17:26:49.931: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 20 17:27:01.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:27:02.296: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 20 17:27:02.296: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 20 17:27:02.296: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 20 17:27:12.758: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:27:12.758: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:12.758: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:12.758: INFO: Waiting for Pod statefulset-8137/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:23.218: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:27:23.218: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:23.218: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:32.980: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:27:32.980: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:32.980: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:42.764: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:27:42.764: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:42.764: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:27:52.767: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:27:52.767: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:28:05.178: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:28:05.178: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:28:12.965: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:28:12.965: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 20 17:28:23.803: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:28:34.121: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
STEP: Rolling back to a previous revision
Apr 20 17:28:42.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:28:43.768: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:28:43.768: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:28:43.768: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 20 17:28:54.063: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 20 17:29:04.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:29:04.990: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 20 17:29:04.990: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 20 17:29:04.990: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 20 17:29:15.120: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:29:15.120: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:15.120: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:15.120: INFO: Waiting for Pod statefulset-8137/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:25.480: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:29:25.480: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:25.480: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:36.491: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:29:36.491: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:36.491: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:45.229: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:29:45.229: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:45.229: INFO: Waiting for Pod statefulset-8137/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:29:55.130: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:29:55.131: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:30:05.335: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:30:05.335: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:30:15.126: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
Apr 20 17:30:15.126: INFO: Waiting for Pod statefulset-8137/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 20 17:30:25.128: INFO: Waiting for StatefulSet statefulset-8137/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 20 17:30:35.716: INFO: Deleting all statefulset in ns statefulset-8137
Apr 20 17:30:35.719: INFO: Scaling statefulset ss2 to 0
Apr 20 17:31:16.273: INFO: Waiting for statefulset status.replicas updated to 0
Apr 20 17:31:16.276: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:31:17.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8137" for this suite.
Apr 20 17:31:41.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:31:42.007: INFO: namespace statefulset-8137 deletion completed in 24.788214348s

• [SLOW TEST:322.555 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
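For context, the rolling update and rollback exercised by the spec above can be reproduced outside the e2e framework with a manifest along these lines. This is a hypothetical reconstruction, not the test's exact object: the name `ss2`, the `test` service, and the label key are inferred from the log; only the image tags (`nginx:1.14-alpine` → `nginx:1.15-alpine`) and replica count (3) are stated there directly.

```yaml
# Sketch of the "ss2" StatefulSet the spec creates (reconstruction, see above).
# The RollingUpdate strategy is what drives the reverse-ordinal pod replacement
# seen in the log; editing spec.template.spec.containers[0].image to
# nginx:1.15-alpine creates the new controller revision (ss2-7c9b54fd4c),
# and editing it back performs the rollback to ss2-6c5cd755cd.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # assumed headless service name
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # assumed label key/value
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The per-pod "Waiting for Pod … to have revision … update revision …" polling in the log corresponds to what `kubectl rollout status statefulset/ss2` reports during such an update.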
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:31:42.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 20 17:32:10.126: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 20 17:32:10.550: INFO: Pod pod-with-poststart-http-hook still exists
Apr 20 17:32:12.550: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 20 17:32:12.621: INFO: Pod pod-with-poststart-http-hook still exists
Apr 20 17:32:14.550: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 20 17:32:14.554: INFO: Pod pod-with-poststart-http-hook still exists
Apr 20 17:32:16.550: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 20 17:32:16.701: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:32:16.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8531" for this suite.
Apr 20 17:32:43.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:32:43.637: INFO: namespace container-lifecycle-hook-8531 deletion completed in 26.929167222s

• [SLOW TEST:61.630 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
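The postStart HTTP hook checked by the spec above amounts to a pod spec like the following. This is a sketch under assumptions: the pod name matches the log, but the image, port, path, and target IP are placeholders — the e2e framework wires the `httpGet` at the handler pod it creates in `[BeforeEach]`, whose address is not shown in the log.

```yaml
# Hypothetical sketch of "pod-with-poststart-http-hook": the kubelet issues an
# HTTP GET against host:port/path immediately after the container starts, and
# the spec verifies the handler pod observed the request.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo                  # assumed path
          port: 8080                   # assumed handler port
          host: 10.244.0.10            # assumed handler-pod IP
```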
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:32:43.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4150
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4150
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4150
Apr 20 17:32:44.591: INFO: Found 0 stateful pods, waiting for 1
Apr 20 17:32:55.364: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Apr 20 17:33:04.595: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 20 17:33:04.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:33:43.943: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:33:43.943: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:33:43.943: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 20 17:33:44.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 20 17:33:54.234: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 20 17:33:54.234: INFO: Waiting for statefulset status.replicas updated to 0
Apr 20 17:33:54.825: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 20 17:33:54.825: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:33:54.825: INFO: 
Apr 20 17:33:54.825: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 20 17:33:56.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.920199654s
Apr 20 17:33:59.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.899695637s
Apr 20 17:34:00.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.535609786s
Apr 20 17:34:01.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.481179284s
Apr 20 17:34:02.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.356103335s
Apr 20 17:34:03.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.236472071s
Apr 20 17:34:04.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 230.973101ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4150
Apr 20 17:34:05.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:34:07.043: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 20 17:34:07.043: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 20 17:34:07.043: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 20 17:34:07.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:34:08.411: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 20 17:34:08.411: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 20 17:34:08.411: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 20 17:34:08.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:34:09.469: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 20 17:34:09.469: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 20 17:34:09.469: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 20 17:34:09.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:34:09.661: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Apr 20 17:34:19.767: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:34:19.767: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 20 17:34:19.767: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 20 17:34:19.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:34:20.153: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:34:20.153: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:34:20.153: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 20 17:34:20.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:34:21.138: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:34:21.138: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:34:21.138: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 20 17:34:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 20 17:34:22.390: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 20 17:34:22.390: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 20 17:34:22.390: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 20 17:34:22.390: INFO: Waiting for statefulset status.replicas updated to 0
Apr 20 17:34:22.407: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Apr 20 17:34:32.852: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 20 17:34:32.852: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 20 17:34:32.852: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 20 17:34:32.869: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:32.869: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:32.869: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:32.869: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:32.869: INFO: 
Apr 20 17:34:32.869: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:34.468: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:34.468: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:34.468: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:34.468: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:34.468: INFO: 
Apr 20 17:34:34.468: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:35.504: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:35.504: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:35.504: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:35.504: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:35.504: INFO: 
Apr 20 17:34:35.504: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:37.482: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:37.482: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:37.482: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:37.482: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:37.482: INFO: 
Apr 20 17:34:37.482: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:38.678: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:38.678: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:38.678: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:38.678: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:38.678: INFO: 
Apr 20 17:34:38.678: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:39.724: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:39.724: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:39.724: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:39.724: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:39.724: INFO: 
Apr 20 17:34:39.724: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:40.727: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:40.727: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:40.727: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:40.727: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:40.727: INFO: 
Apr 20 17:34:40.727: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 20 17:34:42.235: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 20 17:34:42.235: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:32:44 +0000 UTC  }]
Apr 20 17:34:42.235: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:54 +0000 UTC  }]
Apr 20 17:34:42.235: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:34:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:33:55 +0000 UTC  }]
Apr 20 17:34:42.235: INFO: 
Apr 20 17:34:42.235: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4150
Apr 20 17:34:43.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:34:43.911: INFO: rc: 1
Apr 20 17:34:43.911: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002e4b950 exit status 1   true [0xc0034720b8 0xc0034720d0 0xc0034720e8] [0xc0034720b8 0xc0034720d0 0xc0034720e8] [0xc0034720c8 0xc0034720e0] [0xba70e0 0xba70e0] 0xc0029497a0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
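The "Waiting 10s to retry failed RunHostCmd" lines above come from a retry-until-timeout loop around the kubectl exec call. A minimal sketch of that pattern, assuming a hypothetical `run_cmd` callable standing in for the exec invocation (the real framework code lives in test/e2e/framework and retries against an overall test timeout):

```python
import time

def retry_host_cmd(run_cmd, wait_s=10.0, timeout_s=120.0, sleep=time.sleep):
    """Retry run_cmd until it returns rc 0 or the overall timeout expires.

    run_cmd is a stand-in for the kubectl exec invocation; it returns a
    (rc, stdout, stderr) tuple. The fixed wait mirrors the log's
    'Waiting 10s to retry failed RunHostCmd' messages.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        rc, out, err = run_cmd()
        if rc == 0:
            return out
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command still failing: rc={rc}, stderr={err!r}")
        sleep(wait_s)

# Demo: a fake command that fails twice, then succeeds on the third try.
attempts = {"n": 0}
def fake_cmd():
    attempts["n"] += 1
    return (0, "moved", "") if attempts["n"] >= 3 else (1, "", "container not found")

print(retry_host_cmd(fake_cmd, wait_s=0.01, timeout_s=5.0))  # -> moved
```

In the log, the loop keeps retrying even after the error changes from "container not found" to "pods not found", because the pod was deleted out from under the command; the loop only distinguishes rc 0 from nonzero.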
Apr 20 17:34:53.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:34:54.433: INFO: rc: 1
Apr 20 17:34:54.433: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0030ee4b0 exit status 1   true [0xc0009c3780 0xc0009c3880 0xc0009c3930] [0xc0009c3780 0xc0009c3880 0xc0009c3930] [0xc0009c3818 0xc0009c3910] [0xba70e0 0xba70e0] 0xc00362d9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:04.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:04.698: INFO: rc: 1
Apr 20 17:35:04.698: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e4ba10 exit status 1   true [0xc0034720f0 0xc003472108 0xc003472120] [0xc0034720f0 0xc003472108 0xc003472120] [0xc003472100 0xc003472118] [0xba70e0 0xba70e0] 0xc002a6a480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:14.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:14.782: INFO: rc: 1
Apr 20 17:35:14.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0030ee5a0 exit status 1   true [0xc0009c39d8 0xc0009c3a68 0xc0009c3b90] [0xc0009c39d8 0xc0009c3a68 0xc0009c3b90] [0xc0009c3a30 0xc0009c3af0] [0xba70e0 0xba70e0] 0xc00297e960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:24.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:24.879: INFO: rc: 1
Apr 20 17:35:24.879: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e4bb00 exit status 1   true [0xc003472128 0xc003472140 0xc003472158] [0xc003472128 0xc003472140 0xc003472158] [0xc003472138 0xc003472150] [0xba70e0 0xba70e0] 0xc002c5a000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:34.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:34.978: INFO: rc: 1
Apr 20 17:35:34.979: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be090 exit status 1   true [0xc0001a2ec8 0xc0001a31c8 0xc0001a3388] [0xc0001a2ec8 0xc0001a31c8 0xc0001a3388] [0xc0001a3110 0xc0001a3340] [0xba70e0 0xba70e0] 0xc002a6b7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:44.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:45.073: INFO: rc: 1
Apr 20 17:35:45.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000670d50 exit status 1   true [0xc00054c048 0xc003472010 0xc003472028] [0xc00054c048 0xc003472010 0xc003472028] [0xc003472008 0xc003472020] [0xba70e0 0xba70e0] 0xc002948780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:35:55.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:35:55.369: INFO: rc: 1
Apr 20 17:35:55.369: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000670f30 exit status 1   true [0xc003472030 0xc003472048 0xc003472060] [0xc003472030 0xc003472048 0xc003472060] [0xc003472040 0xc003472058] [0xba70e0 0xba70e0] 0xc002948f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:05.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:05.455: INFO: rc: 1
Apr 20 17:36:05.456: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000671020 exit status 1   true [0xc003472068 0xc003472080 0xc003472098] [0xc003472068 0xc003472080 0xc003472098] [0xc003472078 0xc003472090] [0xba70e0 0xba70e0] 0xc002949860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:15.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:15.845: INFO: rc: 1
Apr 20 17:36:15.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032e0090 exit status 1   true [0xc000752020 0xc0007520d0 0xc000752148] [0xc000752020 0xc0007520d0 0xc000752148] [0xc000752088 0xc000752138] [0xba70e0 0xba70e0] 0xc0035aa720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:25.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:26.611: INFO: rc: 1
Apr 20 17:36:26.611: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032e0150 exit status 1   true [0xc0007521a8 0xc000752288 0xc000752350] [0xc0007521a8 0xc000752288 0xc000752350] [0xc000752248 0xc000752328] [0xba70e0 0xba70e0] 0xc0035ab5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:36.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:37.058: INFO: rc: 1
Apr 20 17:36:37.059: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009b4f30 exit status 1   true [0xc000758050 0xc000758218 0xc000758610] [0xc000758050 0xc000758218 0xc000758610] [0xc0007581b8 0xc0007583f0] [0xba70e0 0xba70e0] 0xc002419620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:47.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:47.564: INFO: rc: 1
Apr 20 17:36:47.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009b4ff0 exit status 1   true [0xc0007586d0 0xc000758a98 0xc000758d58] [0xc0007586d0 0xc000758a98 0xc000758d58] [0xc000758a58 0xc000758c40] [0xba70e0 0xba70e0] 0xc00362c4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:36:57.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:36:57.662: INFO: rc: 1
Apr 20 17:36:57.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000671170 exit status 1   true [0xc0034720a0 0xc0034720b8 0xc0034720d0] [0xc0034720a0 0xc0034720b8 0xc0034720d0] [0xc0034720b0 0xc0034720c8] [0xba70e0 0xba70e0] 0xc0026eed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:07.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:07.758: INFO: rc: 1
Apr 20 17:37:07.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009b50b0 exit status 1   true [0xc000758dd8 0xc000759018 0xc000759170] [0xc000758dd8 0xc000759018 0xc000759170] [0xc000758f70 0xc0007590e0] [0xba70e0 0xba70e0] 0xc00362cc00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:17.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:17.853: INFO: rc: 1
Apr 20 17:37:17.853: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009b5170 exit status 1   true [0xc0007591d0 0xc0007592e8 0xc000759418] [0xc0007591d0 0xc0007592e8 0xc000759418] [0xc0007592b0 0xc000759390] [0xba70e0 0xba70e0] 0xc00362d5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:27.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:28.257: INFO: rc: 1
Apr 20 17:37:28.257: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000671320 exit status 1   true [0xc0034720d8 0xc0034720f0 0xc003472108] [0xc0034720d8 0xc0034720f0 0xc003472108] [0xc0034720e8 0xc003472100] [0xba70e0 0xba70e0] 0xc002462540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:38.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:39.331: INFO: rc: 1
Apr 20 17:37:39.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be0c0 exit status 1   true [0xc0001a2000 0xc0001a3110 0xc0001a3340] [0xc0001a2000 0xc0001a3110 0xc0001a3340] [0xc0001a2f98 0xc0001a32a0] [0xba70e0 0xba70e0] 0xc002418360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:49.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:49.530: INFO: rc: 1
Apr 20 17:37:49.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be1b0 exit status 1   true [0xc0001a3388 0xc0001a3438 0xc0001a3658] [0xc0001a3388 0xc0001a3438 0xc0001a3658] [0xc0001a3410 0xc0001a35a0] [0xba70e0 0xba70e0] 0xc0014bdb00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:37:59.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:37:59.637: INFO: rc: 1
Apr 20 17:37:59.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be2a0 exit status 1   true [0xc0001a3698 0xc0001a3860 0xc0001a39d8] [0xc0001a3698 0xc0001a3860 0xc0001a39d8] [0xc0001a3800 0xc0001a38c8] [0xba70e0 0xba70e0] 0xc002948900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:38:09.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:38:09.911: INFO: rc: 1
Apr 20 17:38:09.911: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be3c0 exit status 1   true [0xc0001a3aa0 0xc0001a3d08 0xc0001a3e20] [0xc0001a3aa0 0xc0001a3d08 0xc0001a3e20] [0xc0001a3cc8 0xc0001a3df0] [0xba70e0 0xba70e0] 0xc002949080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:38:19.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:38:20.092: INFO: rc: 1
Apr 20 17:38:20.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be480 exit status 1   true [0xc0001a3e60 0xc0001a3f88 0xc000758198] [0xc0001a3e60 0xc0001a3f88 0xc000758198] [0xc0001a3ec8 0xc000758050] [0xba70e0 0xba70e0] 0xc0029498c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:38:30.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:38:30.181: INFO: rc: 1
Apr 20 17:38:30.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009b4f00 exit status 1   true [0xc003472000 0xc003472018 0xc003472030] [0xc003472000 0xc003472018 0xc003472030] [0xc003472010 0xc003472028] [0xba70e0 0xba70e0] 0xc002a6b7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:38:40.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:38:40.268: INFO: rc: 1
Apr 20 17:38:40.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000670420 exit status 1   true [0xc000752020 0xc0007520d0 0xc000752148] [0xc000752020 0xc0007520d0 0xc000752148] [0xc000752088 0xc000752138] [0xba70e0 0xba70e0] 0xc00362c540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:38:50.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:38:50.375: INFO: rc: 1
Apr 20 17:38:50.375: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be570 exit status 1   true [0xc0007581b8 0xc0007583f0 0xc000758790] [0xc0007581b8 0xc0007583f0 0xc000758790] [0xc000758280 0xc0007586d0] [0xba70e0 0xba70e0] 0xc0024623c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:00.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:00.479: INFO: rc: 1
Apr 20 17:39:00.479: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be630 exit status 1   true [0xc000758a58 0xc000758c40 0xc000758e70] [0xc000758a58 0xc000758c40 0xc000758e70] [0xc000758be8 0xc000758dd8] [0xba70e0 0xba70e0] 0xc002462900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:10.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:10.648: INFO: rc: 1
Apr 20 17:39:10.648: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be720 exit status 1   true [0xc000758f70 0xc0007590e0 0xc000759260] [0xc000758f70 0xc0007590e0 0xc000759260] [0xc0007590c0 0xc0007591d0] [0xba70e0 0xba70e0] 0xc002462f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:20.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:20.803: INFO: rc: 1
Apr 20 17:39:20.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000670ed0 exit status 1   true [0xc0007521a8 0xc000752288 0xc000752350] [0xc0007521a8 0xc000752288 0xc000752350] [0xc000752248 0xc000752328] [0xba70e0 0xba70e0] 0xc00362cc60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:30.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:31.061: INFO: rc: 1
Apr 20 17:39:31.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000671050 exit status 1   true [0xc000752398 0xc000752450 0xc0007524b8] [0xc000752398 0xc000752450 0xc0007524b8] [0xc000752428 0xc000752498] [0xba70e0 0xba70e0] 0xc00362d620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:41.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:41.172: INFO: rc: 1
Apr 20 17:39:41.172: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024be090 exit status 1   true [0xc0001a2ec8 0xc0001a31c8 0xc0001a3388] [0xc0001a2ec8 0xc0001a31c8 0xc0001a3388] [0xc0001a3110 0xc0001a3340] [0xba70e0 0xba70e0] 0xc002948780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Apr 20 17:39:51.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4150 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 20 17:39:52.067: INFO: rc: 1
Apr 20 17:39:52.067: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Apr 20 17:39:52.067: INFO: Scaling statefulset ss to 0
Apr 20 17:39:52.693: INFO: Waiting for statefulset status.replicas updated to 0
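"Scaling statefulset ss to 0" followed by "Waiting for statefulset status.replicas updated to 0" is the generic scale-and-wait pattern. A minimal sketch, assuming a hypothetical `get_replicas` callable in place of a real API client reading the StatefulSet's `status.replicas`:

```python
import time

def wait_for_replicas(get_replicas, want=0, timeout_s=60.0, poll_s=1.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll until get_replicas() == want or the timeout expires.

    get_replicas stands in for fetching the StatefulSet from the API
    server and reading status.replicas; returns True on success,
    False if the timeout is hit first.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if get_replicas() == want:
            return True
        sleep(poll_s)
    return False

# Demo: the replica count drains 3 -> 2 -> 1 -> 0 across successive polls.
counts = iter([3, 2, 1, 0])
assert wait_for_replicas(lambda: next(counts), want=0, poll_s=0)
```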
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 20 17:39:52.695: INFO: Deleting all statefulset in ns statefulset-4150
Apr 20 17:39:52.697: INFO: Scaling statefulset ss to 0
Apr 20 17:39:52.864: INFO: Waiting for statefulset status.replicas updated to 0
Apr 20 17:39:52.866: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:39:52.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4150" for this suite.
Apr 20 17:40:11.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:40:11.803: INFO: namespace statefulset-4150 deletion completed in 18.682926924s

• [SLOW TEST:448.166 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:40:11.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-95d58791-3c24-4c63-bbb5-6b749ab509e8
STEP: Creating a pod to test consume configMaps
Apr 20 17:40:14.117: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c" in namespace "projected-9683" to be "success or failure"
Apr 20 17:40:14.144: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.338056ms
Apr 20 17:40:16.437: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320527851s
Apr 20 17:40:18.443: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326081479s
Apr 20 17:40:20.587: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.47010509s
Apr 20 17:40:23.356: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239303009s
Apr 20 17:40:25.714: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.597538245s
Apr 20 17:40:27.761: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Running", Reason="", readiness=true. Elapsed: 13.643864524s
Apr 20 17:40:29.764: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.64737691s
STEP: Saw pod success
Apr 20 17:40:29.764: INFO: Pod "pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c" satisfied condition "success or failure"
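The "success or failure" condition polled above is simply the pod reaching a terminal phase. A minimal sketch of that wait, assuming a hypothetical `get_phase` callable in place of fetching `pod.status.phase` from the API server:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            sleep=time.sleep):
    """Poll until the pod reaches a terminal phase (the test's
    'success or failure' condition) or the timeout expires.

    get_phase stands in for reading pod.status.phase.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        sleep(poll_s)
    raise TimeoutError("pod never reached a terminal phase")

# Demo: Pending -> Running -> Succeeded, as in the log above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), poll_s=0))  # -> Succeeded
```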
Apr 20 17:40:29.767: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c container projected-configmap-volume-test: 
STEP: delete the pod
Apr 20 17:40:29.942: INFO: Waiting for pod pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c to disappear
Apr 20 17:40:30.150: INFO: Pod pod-projected-configmaps-95ec5955-2a93-434b-b83e-12da929bc27c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:40:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9683" for this suite.
Apr 20 17:40:38.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:40:38.845: INFO: namespace projected-9683 deletion completed in 8.69043176s

• [SLOW TEST:27.041 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
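The projected-ConfigMap test above mounts a ConfigMap key through a `projected` volume with an item mapping (a key exposed under a custom path) and reads it from a non-root container. A minimal manifest exercising the same path might look like the sketch below; the resource names, image, key, and mapped path are illustrative assumptions, not taken from the log.

```yaml
# Hypothetical sketch of the scenario tested above: a ConfigMap key remapped
# to a custom path inside a projected volume, read by a non-root container.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test      # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                   # non-root, per the [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                    # assumed test image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
          items:
          - key: data-1
            path: path/to/data-2      # the "mapping": key surfaced at a new path
```

The pod runs to `Succeeded` once the mapped file is readable, which is the "success or failure" condition the log polls for.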
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:40:38.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0420 17:41:19.798034       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 20 17:41:19.798: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:41:19.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1648" for this suite.
Apr 20 17:41:35.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:41:35.974: INFO: namespace gc-1648 deletion completed in 16.173044301s

• [SLOW TEST:57.128 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
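The garbage-collector test above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods are not cascaded away. Over the REST API this behavior is selected with `propagationPolicy: Orphan` in the DeleteOptions body; a sketch of that body is below (the request path placeholders are illustrative).

```yaml
# DeleteOptions body sent with
#   DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
# Orphan leaves the RC's pods in place instead of cascading the delete.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl the equivalent is roughly `kubectl delete rc <name> --cascade=orphan` (flag spelling varies across kubectl versions; older releases used `--cascade=false`).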
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:41:35.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 20 17:41:36.688: INFO: Waiting up to 5m0s for pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba" in namespace "downward-api-6352" to be "success or failure"
Apr 20 17:41:36.721: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 33.284121ms
Apr 20 17:41:38.726: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037774625s
Apr 20 17:41:40.729: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041326021s
Apr 20 17:41:42.734: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045989792s
Apr 20 17:41:44.816: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127693227s
Apr 20 17:41:46.821: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132996513s
Apr 20 17:41:49.308: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Running", Reason="", readiness=true. Elapsed: 12.619904608s
Apr 20 17:41:51.354: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.666249705s
STEP: Saw pod success
Apr 20 17:41:51.354: INFO: Pod "downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba" satisfied condition "success or failure"
Apr 20 17:41:51.511: INFO: Trying to get logs from node iruya-worker pod downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba container dapi-container: 
STEP: delete the pod
Apr 20 17:41:51.599: INFO: Waiting for pod downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba to disappear
Apr 20 17:41:51.757: INFO: Pod downward-api-57f604c5-42d5-483b-b1e7-c0bfa7e47aba no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:41:51.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6352" for this suite.
Apr 20 17:41:57.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:41:57.878: INFO: namespace downward-api-6352 deletion completed in 6.115767795s

• [SLOW TEST:21.903 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
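The Downward API test above injects the pod's own name, namespace, and IP into container environment variables via `fieldRef`. A minimal pod sketch for the same behavior follows; the pod name, image, and env var names are assumptions for illustration.

```yaml
# Pod exposing its own metadata as env vars through the downward API,
# as exercised by the test above. Names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]      # the test asserts on the printed env
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP     # populated only once the pod has an IP
```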
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:41:57.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 20 17:41:58.357: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 20 17:41:58.389: INFO: Waiting for terminating namespaces to be deleted...
Apr 20 17:41:58.397: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Apr 20 17:41:58.401: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.401: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 20 17:41:58.401: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.401: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 20 17:41:58.401: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.401: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 20 17:41:58.401: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Apr 20 17:41:58.405: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.405: INFO: 	Container chaos-mesh ready: true, restart count 0
Apr 20 17:41:58.405: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.405: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 20 17:41:58.405: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.405: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 20 17:41:58.405: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 20 17:41:58.405: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b7dcce76-15ec-40f4-99e8-b6f57739e11b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b7dcce76-15ec-40f4-99e8-b6f57739e11b off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b7dcce76-15ec-40f4-99e8-b6f57739e11b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:42:13.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2481" for this suite.
Apr 20 17:42:23.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:42:24.059: INFO: namespace sched-pred-2481 deletion completed in 10.485574459s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:26.181 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
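The scheduling test above applies a random label to a node (`kubernetes.io/e2e-… = 42`, per the log) and relaunches the pod with a matching `nodeSelector`, which must place it on that node. A sketch of the relaunched pod is below; the label key is copied from the log, while the pod name and image are assumptions.

```yaml
# Pod with a nodeSelector matching the label the test applied to the node.
# Label key/value mirror the log; pod name and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-b7dcce76-15ec-40f4-99e8-b6f57739e11b: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1       # assumed placeholder image
```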
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:42:24.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ab24dfe3-a93c-468b-ae36-2ffa838dd9bc
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:42:24.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8367" for this suite.
Apr 20 17:42:32.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:42:32.625: INFO: namespace secrets-8367 deletion completed in 8.333035767s

• [SLOW TEST:8.566 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
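The Secrets test above is a negative test: it submits a Secret whose data map contains an empty key, which API-server validation must reject. A sketch of such an invalid manifest is below (the Secret name and value are illustrative).

```yaml
# Invalid Secret: an empty string is not a legal data key, so the API
# server rejects this at validation time -- the failure the test expects.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test          # illustrative name
data:
  "": dmFsdWUtMQ==                    # empty key -> validation error
```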
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:42:32.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-8949
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8949 to expose endpoints map[]
Apr 20 17:42:32.852: INFO: successfully validated that service multi-endpoint-test in namespace services-8949 exposes endpoints map[] (17.573976ms elapsed)
STEP: Creating pod pod1 in namespace services-8949
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8949 to expose endpoints map[pod1:[100]]
Apr 20 17:42:37.318: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.456432315s elapsed, will retry)
Apr 20 17:42:38.324: INFO: successfully validated that service multi-endpoint-test in namespace services-8949 exposes endpoints map[pod1:[100]] (5.46165944s elapsed)
STEP: Creating pod pod2 in namespace services-8949
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8949 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 20 17:42:42.606: INFO: Unexpected endpoints: found map[b6922732-34fa-45fa-828c-cc5cfe322e3e:[100]], expected map[pod1:[100] pod2:[101]] (4.279226457s elapsed, will retry)
Apr 20 17:42:43.616: INFO: successfully validated that service multi-endpoint-test in namespace services-8949 exposes endpoints map[pod1:[100] pod2:[101]] (5.289115459s elapsed)
STEP: Deleting pod pod1 in namespace services-8949
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8949 to expose endpoints map[pod2:[101]]
Apr 20 17:42:44.896: INFO: successfully validated that service multi-endpoint-test in namespace services-8949 exposes endpoints map[pod2:[101]] (1.275912457s elapsed)
STEP: Deleting pod pod2 in namespace services-8949
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8949 to expose endpoints map[]
Apr 20 17:42:45.308: INFO: successfully validated that service multi-endpoint-test in namespace services-8949 exposes endpoints map[] (344.111829ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:42:46.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8949" for this suite.
Apr 20 17:43:11.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:43:11.443: INFO: namespace services-8949 deletion completed in 24.780939481s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.817 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
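The Services test above creates a multiport service and checks that the endpoints map tracks pods as they come and go (`map[pod1:[100] pod2:[101]]` and so on). A service sketch consistent with those endpoint ports follows; the selector, port names, and service ports are assumptions, while the target ports 100 and 101 mirror the log.

```yaml
# Multiport service whose endpoints resolve to container ports 100 and 101,
# matching the endpoint maps seen in the log. Other fields are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test          # assumed pod label
  ports:
  - name: portname1
    port: 80
    targetPort: 100                   # pod1's container port
  - name: portname2
    port: 81
    targetPort: 101                   # pod2's container port
```

Each backing pod exposes only one of the two target ports, which is why the endpoints map lists a single port per pod.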
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:43:11.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:43:11.946: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 20 17:43:12.016: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 20 17:43:17.046: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 20 17:43:21.053: INFO: Creating deployment "test-rolling-update-deployment"
Apr 20 17:43:21.058: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 20 17:43:21.097: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 20 17:43:23.105: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 20 17:43:23.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:43:25.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:43:27.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:43:29.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537408, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537401, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:43:31.224: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 20 17:43:31.232: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3789,SelfLink:/apis/apps/v1/namespaces/deployment-3789/deployments/test-rolling-update-deployment,UID:24d3c6a1-15c1-4eb5-b8f3-cc17de0592fb,ResourceVersion:1310844,Generation:1,CreationTimestamp:2021-04-20 17:43:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-04-20 17:43:21 +0000 UTC 2021-04-20 17:43:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-04-20 17:43:29 +0000 UTC 2021-04-20 17:43:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Apr 20 17:43:31.234: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3789,SelfLink:/apis/apps/v1/namespaces/deployment-3789/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:e91015d2-393f-4b0a-8162-e418d63743d3,ResourceVersion:1310830,Generation:1,CreationTimestamp:2021-04-20 17:43:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24d3c6a1-15c1-4eb5-b8f3-cc17de0592fb 0xc001944a57 0xc001944a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 20 17:43:31.234: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Apr 20 17:43:31.234: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3789,SelfLink:/apis/apps/v1/namespaces/deployment-3789/replicasets/test-rolling-update-controller,UID:3663f69c-4ad1-465c-87db-7c362906425c,ResourceVersion:1310843,Generation:2,CreationTimestamp:2021-04-20 17:43:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24d3c6a1-15c1-4eb5-b8f3-cc17de0592fb 0xc001944977 0xc001944978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 20 17:43:31.237: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-qx9gx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-qx9gx,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3789,SelfLink:/api/v1/namespaces/deployment-3789/pods/test-rolling-update-deployment-79f6b9d75c-qx9gx,UID:00dfce29-f877-4bca-8248-f66629b54feb,ResourceVersion:1310829,Generation:0,CreationTimestamp:2021-04-20 17:43:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c e91015d2-393f-4b0a-8162-e418d63743d3 0xc002693e87 0xc002693e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z8lq4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z8lq4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-z8lq4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002693f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002693f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:43:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:43:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:43:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:43:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.127,StartTime:2021-04-20 17:43:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-04-20 17:43:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0fde5ace4c891366fc3a14f61e2b7c3ad6f750e9af55bdfd9cc0c07ab95947be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:43:31.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3789" for this suite.
Apr 20 17:43:39.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:43:39.465: INFO: namespace deployment-3789 deletion completed in 8.225672014s

• [SLOW TEST:28.022 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
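The rolling-update test above ends with the old ReplicaSet at `Replicas:*0` and the new one fully available. As an illustration only (not the controller's actual code), the surge/scale-down bookkeeping a rolling update performs can be sketched like this; `rolling_update` and its parameters are hypothetical names:

```python
def rolling_update(desired, max_surge=1, max_unavailable=0):
    """Simulate ReplicaSet bookkeeping during a rolling update.

    Returns the sequence of (old_replicas, new_replicas) states, ending
    with the old ReplicaSet at 0 and the new one at `desired` -- the end
    state the e2e test asserts on ("All old ReplicaSets ... Replicas:*0").
    """
    if max_surge == 0 and max_unavailable == 0:
        raise ValueError("rollout cannot make progress with no surge and no unavailability budget")
    old, new = desired, 0
    states = [(old, new)]
    while not (old == 0 and new == desired):
        # Surge first: create new pods up to desired + max_surge total.
        can_create = desired + max_surge - (old + new)
        new = min(desired, new + max(can_create, 0))
        # Then scale old pods down, keeping availability >= desired - max_unavailable
        # (for simplicity, every running pod is assumed ready).
        available = old + new
        can_delete = available - (desired - max_unavailable)
        old = max(0, old - max(can_delete, 0))
        states.append((old, new))
    return states
```

With `desired=1` this reproduces the log's shape: one surged replacement pod, then the old controller scaled to zero.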
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:43:39.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:43:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5471" for this suite.
Apr 20 17:43:50.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:43:50.274: INFO: namespace emptydir-wrapper-5471 deletion completed in 6.13026514s

• [SLOW TEST:10.809 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
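The wrapper-volume test mounts a secret volume and a configmap volume in one pod and expects them not to conflict. The property it relies on is that no two volumes claim the same mount path; a small sketch of that uniqueness check (helper name and pair format are illustrative, not the e2e framework's API):

```python
def find_mount_conflicts(volume_mounts):
    """Return mount paths requested by more than one volume.

    `volume_mounts` is a list of (volume_name, mount_path) pairs; a path
    counts as conflicting when two differently named volumes claim it.
    """
    seen = {}
    conflicts = []
    for name, path in volume_mounts:
        norm = path.rstrip("/") or "/"  # treat /etc/x and /etc/x/ as the same path
        if norm in seen and seen[norm] != name:
            conflicts.append(norm)
        seen[norm] = name
    return conflicts
```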
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:43:50.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-786/configmap-test-b1a97e2f-cca0-4297-98d7-6bcefbacee96
STEP: Creating a pod to test consume configMaps
Apr 20 17:43:50.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c" in namespace "configmap-786" to be "success or failure"
Apr 20 17:43:50.908: INFO: Pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 117.730646ms
Apr 20 17:43:52.912: INFO: Pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121333792s
Apr 20 17:43:54.915: INFO: Pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124704159s
Apr 20 17:43:56.919: INFO: Pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127925735s
STEP: Saw pod success
Apr 20 17:43:56.919: INFO: Pod "pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c" satisfied condition "success or failure"
Apr 20 17:43:56.921: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c container env-test: 
STEP: delete the pod
Apr 20 17:43:56.943: INFO: Waiting for pod pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c to disappear
Apr 20 17:43:57.003: INFO: Pod pod-configmaps-1ddd6367-28ef-4347-9a55-75b6cffbfc2c no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:43:57.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-786" for this suite.
Apr 20 17:44:03.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:44:03.118: INFO: namespace configmap-786 deletion completed in 6.111564752s

• [SLOW TEST:12.844 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
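The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-terminal-phase loop. A minimal stand-alone sketch of that pattern (function and parameter names are illustrative; the injectable `clock`/`sleep` hooks are just for testability, not part of the framework):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod reaches a terminal phase or the timeout expires.

    `get_phase` is any callable returning the current phase string
    ("Pending", "Running", "Succeeded", "Failed"). Returns the terminal
    phase and the elapsed time, mirroring the log's per-check output.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```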
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:44:03.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:44:03.194: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

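The request line `(0) /api/v1/nodes/iruya-worker:10250/proxy/logs/` shows the path shape this test exercises: the apiserver proxies the request to the named node's kubelet log endpoint. Building that path is trivial; sketched here with an illustrative helper name:

```python
def node_log_proxy_path(node_name, kubelet_port=10250):
    """Build the apiserver path that proxies to a node's kubelet /logs endpoint.

    Matches the request logged above: the ':<port>' suffix selects the
    kubelet port explicitly, and '/proxy/logs/' is the proxy subresource.
    """
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/logs/"
```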
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 20 17:44:09.513: INFO: Waiting up to 5m0s for pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217" in namespace "emptydir-5590" to be "success or failure"
Apr 20 17:44:09.517: INFO: Pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.660182ms
Apr 20 17:44:11.609: INFO: Pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096455037s
Apr 20 17:44:13.650: INFO: Pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137398401s
Apr 20 17:44:15.654: INFO: Pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141582527s
STEP: Saw pod success
Apr 20 17:44:15.654: INFO: Pod "pod-995eab3f-d3f7-4e94-a24f-967948a6a217" satisfied condition "success or failure"
Apr 20 17:44:15.657: INFO: Trying to get logs from node iruya-worker pod pod-995eab3f-d3f7-4e94-a24f-967948a6a217 container test-container: 
STEP: delete the pod
Apr 20 17:44:15.731: INFO: Waiting for pod pod-995eab3f-d3f7-4e94-a24f-967948a6a217 to disappear
Apr 20 17:44:15.884: INFO: Pod pod-995eab3f-d3f7-4e94-a24f-967948a6a217 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:44:15.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5590" for this suite.
Apr 20 17:44:22.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:44:22.145: INFO: namespace emptydir-5590 deletion completed in 6.256034055s

• [SLOW TEST:12.787 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
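The `(non-root,0777,tmpfs)` test writes a file on a tmpfs-backed emptyDir and checks its permission bits from inside the container. Reproduced locally, the same check is a `stat()` call; a small sketch (helper name is illustrative, and a throwaway temp file stands in for the volume file):

```python
import os
import stat
import tempfile

def mode_bits(path):
    """Return the permission bits of `path` as an octal string like '0o777'."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

# Stand-in for the volume file: create it, then set the 0777 mode the test expects.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o777)
```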
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:44:22.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 20 17:44:22.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741" in namespace "downward-api-2083" to be "success or failure"
Apr 20 17:44:22.254: INFO: Pod "downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741": Phase="Pending", Reason="", readiness=false. Elapsed: 32.477258ms
Apr 20 17:44:24.459: INFO: Pod "downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236902551s
Apr 20 17:44:26.527: INFO: Pod "downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305408905s
STEP: Saw pod success
Apr 20 17:44:26.527: INFO: Pod "downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741" satisfied condition "success or failure"
Apr 20 17:44:26.539: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741 container client-container: 
STEP: delete the pod
Apr 20 17:44:26.662: INFO: Waiting for pod downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741 to disappear
Apr 20 17:44:26.676: INFO: Pod downwardapi-volume-d399cf82-718a-45d5-99cb-6bd583bf6741 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:44:26.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2083" for this suite.
Apr 20 17:44:32.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:44:32.785: INFO: namespace downward-api-2083 deletion completed in 6.10441346s

• [SLOW TEST:10.640 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
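The downward-API test exposes the container's memory request through a volume file, where the value is divided by the requested divisor before being written (Kubernetes rounds the quotient up; a 64Mi request with a 1Mi divisor appears in the file as `64`). A hedged sketch of that formatting, with an illustrative function name:

```python
def downward_api_resource_value(quantity_bytes, divisor_bytes=1):
    """Format a resource quantity the way the downward-API volume file does:
    divide by the divisor and round up, emitting a plain decimal string."""
    return str(-(-quantity_bytes // divisor_bytes))  # ceiling division

MI = 1024 * 1024  # one mebibyte, the usual divisor for memory requests
```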
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:44:32.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7f7c75f3-9459-42b4-9fa4-3a457a7a9356
STEP: Creating a pod to test consume secrets
Apr 20 17:44:32.888: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938" in namespace "projected-87" to be "success or failure"
Apr 20 17:44:32.892: INFO: Pod "pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923238ms
Apr 20 17:44:34.894: INFO: Pod "pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006799998s
Apr 20 17:44:36.899: INFO: Pod "pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010990818s
STEP: Saw pod success
Apr 20 17:44:36.899: INFO: Pod "pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938" satisfied condition "success or failure"
Apr 20 17:44:36.901: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938 container projected-secret-volume-test: 
STEP: delete the pod
Apr 20 17:44:36.934: INFO: Waiting for pod pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938 to disappear
Apr 20 17:44:36.946: INFO: Pod pod-projected-secrets-7680b0c7-f9a9-4894-9ca1-6340b0002938 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:44:36.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-87" for this suite.
Apr 20 17:44:42.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:44:43.051: INFO: namespace projected-87 deletion completed in 6.102328037s

• [SLOW TEST:10.266 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:44:43.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:44:43.117: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 20 17:44:48.122: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 20 17:44:48.122: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 20 17:44:50.127: INFO: Creating deployment "test-rollover-deployment"
Apr 20 17:44:50.141: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 20 17:44:52.148: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 20 17:44:52.155: INFO: Ensure that both replica sets have 1 created replica
Apr 20 17:44:52.159: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 20 17:44:52.163: INFO: Updating deployment test-rollover-deployment
Apr 20 17:44:52.163: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Apr 20 17:44:54.218: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Apr 20 17:44:54.225: INFO: Make sure deployment "test-rollover-deployment" is complete
Apr 20 17:44:54.232: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:44:54.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537492, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:44:56.240: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:44:56.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537495, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:44:58.239: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:44:58.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537495, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:45:00.239: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:45:00.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537495, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:45:02.240: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:45:02.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537495, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:45:04.240: INFO: all replica sets need to contain the pod-template-hash label
Apr 20 17:45:04.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537495, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63754537490, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 20 17:45:06.239: INFO: 
Apr 20 17:45:06.239: INFO: Ensure that both old replica sets have no replicas
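The `DeploymentStatus` dumps polled above converge on a completeness condition: the controller has observed the latest generation and every replica is updated and available. A sketch of that check against status fields shaped like the log's `v1.DeploymentStatus` (the dict form and function name are illustrative):

```python
def deployment_complete(status, desired, generation):
    """Completeness check the rollover test polls for.

    `status` carries the same field names as the v1.DeploymentStatus
    lines logged above; `desired` is spec.replicas and `generation`
    the deployment's metadata generation.
    """
    return (
        status["observedGeneration"] >= generation
        and status["updatedReplicas"] == desired
        and status["replicas"] == desired
        and status["availableReplicas"] == desired
    )
```

With the in-progress status from the log (`Replicas:2, UpdatedReplicas:1, AvailableReplicas:1`) this is false; it flips true only at the final `Replicas:1, UpdatedReplicas:1, AvailableReplicas:1` state.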
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 20 17:45:06.247: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6090,SelfLink:/apis/apps/v1/namespaces/deployment-6090/deployments/test-rollover-deployment,UID:cf82be51-3463-4471-bb1e-751b2bf8e5d1,ResourceVersion:1311261,Generation:2,CreationTimestamp:2021-04-20 17:44:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-04-20 17:44:50 +0000 UTC 2021-04-20 17:44:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-04-20 17:45:05 +0000 UTC 2021-04-20 17:44:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Apr 20 17:45:06.250: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6090,SelfLink:/apis/apps/v1/namespaces/deployment-6090/replicasets/test-rollover-deployment-854595fc44,UID:bc81f2e6-bdcc-43c6-9d91-cc4e9c201e1c,ResourceVersion:1311250,Generation:2,CreationTimestamp:2021-04-20 17:44:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cf82be51-3463-4471-bb1e-751b2bf8e5d1 0xc003804c67 0xc003804c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 20 17:45:06.250: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Apr 20 17:45:06.251: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6090,SelfLink:/apis/apps/v1/namespaces/deployment-6090/replicasets/test-rollover-controller,UID:09b6230f-b1e6-4cd3-bd0d-b2abe5a54445,ResourceVersion:1311259,Generation:2,CreationTimestamp:2021-04-20 17:44:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cf82be51-3463-4471-bb1e-751b2bf8e5d1 0xc003804b97 0xc003804b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 20 17:45:06.251: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6090,SelfLink:/apis/apps/v1/namespaces/deployment-6090/replicasets/test-rollover-deployment-9b8b997cf,UID:6466e4d7-ad40-44c7-aaea-ca5e22469368,ResourceVersion:1311216,Generation:2,CreationTimestamp:2021-04-20 17:44:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cf82be51-3463-4471-bb1e-751b2bf8e5d1 0xc003804d30 0xc003804d31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 20 17:45:06.254: INFO: Pod "test-rollover-deployment-854595fc44-b9mgg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-b9mgg,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6090,SelfLink:/api/v1/namespaces/deployment-6090/pods/test-rollover-deployment-854595fc44-b9mgg,UID:fd5c582a-5c86-493c-abb4-b8d2ed8f484e,ResourceVersion:1311228,Generation:0,CreationTimestamp:2021-04-20 17:44:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 bc81f2e6-bdcc-43c6-9d91-cc4e9c201e1c 0xc0038058f7 0xc0038058f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b6xrm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b6xrm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-b6xrm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003805970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003805990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:44:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:44:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:44:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-20 17:44:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.60,StartTime:2021-04-20 17:44:52 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-04-20 17:44:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a8ea3ee737aacee331176ae05f1b5b867d393bd7ac4f67bdca4d230d2e9841f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:45:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6090" for this suite.
Apr 20 17:45:12.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:45:12.420: INFO: namespace deployment-6090 deletion completed in 6.163112085s

• [SLOW TEST:29.368 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
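The rollover test above polls `v1.DeploymentStatus` until the new ReplicaSet has fully taken over: the mid-rollout dump at 17:45:04 shows `UpdatedReplicas:1, UnavailableReplicas:1`, while the final dump shows every replica updated, ready, and available. A minimal sketch of that completeness check, using plain dicts in place of the Go status struct (field names taken from the log; the helper name is illustrative, not the e2e framework's API):

```python
def rollover_complete(generation, status):
    """A Deployment rollover is done when the controller has observed the
    latest spec generation and every replica is updated, ready, and available."""
    return (
        status["observedGeneration"] >= generation
        and status["updatedReplicas"] == status["replicas"]
        and status["readyReplicas"] == status["replicas"]
        and status["unavailableReplicas"] == 0
    )

# Mid-rollout status from the log (17:45:04): still progressing.
mid = {"observedGeneration": 2, "replicas": 2, "updatedReplicas": 1,
       "readyReplicas": 2, "availableReplicas": 1, "unavailableReplicas": 1}
# Final status from the log (17:45:06): rollover complete.
done = {"observedGeneration": 2, "replicas": 1, "updatedReplicas": 1,
        "readyReplicas": 1, "availableReplicas": 1, "unavailableReplicas": 0}

print(rollover_complete(2, mid))   # False
print(rollover_complete(2, done))  # True
```

This is also why the test then asserts "both old replica sets have no replicas": the two old ReplicaSets in the dump (`test-rollover-controller` and `test-rollover-deployment-9b8b997cf`) both report `Replicas:*0` in spec and `Replicas:0` in status.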
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:45:12.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-62dac227-71c2-4b76-a63e-61b3ec90319f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-62dac227-71c2-4b76-a63e-61b3ec90319f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:45:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2928" for this suite.
Apr 20 17:45:40.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:45:41.058: INFO: namespace configmap-2928 deletion completed in 22.179344011s

• [SLOW TEST:28.638 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
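The ConfigMap test above creates a ConfigMap, mounts it into a pod as a volume, updates the ConfigMap, and then waits at "STEP: waiting to observe update in volume" until the kubelet re-projects the new value into the mounted file. A local sketch of that polling loop, with a temp file standing in for the mounted key (file name, values, and timeout are illustrative):

```python
import os
import tempfile
import time

def wait_for_file_content(path, expected, timeout=5.0, interval=0.1):
    """Poll a file until it contains the expected content, much as the e2e
    test polls the mounted ConfigMap key after the update."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path) as f:
                if f.read() == expected:
                    return True
        except FileNotFoundError:
            pass  # file may briefly disappear while the volume is re-projected
        time.sleep(interval)
    return False

with tempfile.TemporaryDirectory() as d:
    key = os.path.join(d, "data-1")
    with open(key, "w") as f:
        f.write("value-1")   # initial projected value
    with open(key, "w") as f:
        f.write("value-2")   # kubelet rewrites the file after the ConfigMap update
    observed = wait_for_file_content(key, "value-2")

print(observed)  # True
```

In a real cluster the re-projection is asynchronous (it can take up to the kubelet sync period plus the ConfigMap cache TTL), which is why the test polls rather than reading once.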
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:45:41.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 20 17:45:41.128: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:45:59.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6544" for this suite.
Apr 20 17:46:07.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:46:07.597: INFO: namespace pods-6544 deletion completed in 8.110338585s

• [SLOW TEST:26.538 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
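The Pods test above sets up a watch before submitting the pod, then asserts that an ADDED event and, after graceful deletion, a DELETED event are both observed. A sketch of that event-draining check, with a simple queue standing in for the watch channel (event tuples and pod name are illustrative):

```python
from queue import Empty, Queue

def observed(events_q, want_type, name, timeout=1.0):
    """Drain watch events until one of the wanted type arrives for the pod,
    mirroring 'verifying pod creation/deletion was observed'."""
    while True:
        try:
            ev = events_q.get(timeout=timeout)
        except Empty:
            return False
        if ev == (want_type, name):
            return True

q = Queue()
# A plausible event sequence for the test's pod lifecycle:
for ev in [("ADDED", "pod-a"), ("MODIFIED", "pod-a"), ("DELETED", "pod-a")]:
    q.put(ev)

saw_added = observed(q, "ADDED", "pod-a")
saw_deleted = observed(q, "DELETED", "pod-a")
print(saw_added, saw_deleted)  # True True
```

Setting up the watch before the create is what makes the ADDED assertion reliable; a watch started afterwards could miss the creation event entirely.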
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:46:07.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-abcc7994-303b-41c6-aeaa-606f62f202bb
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:46:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4600" for this suite.
Apr 20 17:46:36.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:46:36.601: INFO: namespace configmap-4600 deletion completed in 22.146427719s

• [SLOW TEST:29.003 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
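The binary-data variant above works because a ConfigMap's `binaryData` field carries values base64-encoded in the API object, while the kubelet writes the decoded raw bytes into the mounted file. A sketch of that round trip (key bytes are illustrative):

```python
import base64

raw = bytes([0xDE, 0xAD, 0xBE, 0xEF])
# What the API object stores under .binaryData for the key:
encoded = base64.b64encode(raw).decode("ascii")
# What the kubelet projects into the mounted file:
decoded = base64.b64decode(encoded)

print(encoded)         # 3q2+7w==
print(decoded == raw)  # True
```

This is why the test waits separately for the text data (from `data`) and the binary data (from `binaryData`): both land as files in the same volume, but only the latter goes through the base64 round trip.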
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:46:36.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 20 17:46:36.687: INFO: Create a RollingUpdate DaemonSet
Apr 20 17:46:36.691: INFO: Check that daemon pods launch on every node of the cluster
Apr 20 17:46:36.722: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:36.727: INFO: Number of nodes with available pods: 0
Apr 20 17:46:36.727: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:37.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:37.735: INFO: Number of nodes with available pods: 0
Apr 20 17:46:37.735: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:38.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:38.736: INFO: Number of nodes with available pods: 0
Apr 20 17:46:38.736: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:39.779: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:40.653: INFO: Number of nodes with available pods: 0
Apr 20 17:46:40.653: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:41.301: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:41.568: INFO: Number of nodes with available pods: 0
Apr 20 17:46:41.568: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:41.766: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:41.769: INFO: Number of nodes with available pods: 0
Apr 20 17:46:41.770: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:43.186: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:43.190: INFO: Number of nodes with available pods: 0
Apr 20 17:46:43.190: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:43.733: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:43.736: INFO: Number of nodes with available pods: 0
Apr 20 17:46:43.736: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:44.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:44.805: INFO: Number of nodes with available pods: 0
Apr 20 17:46:44.805: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:45.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:45.743: INFO: Number of nodes with available pods: 0
Apr 20 17:46:45.743: INFO: Node iruya-worker is running more than one daemon pod
Apr 20 17:46:46.731: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:46:46.735: INFO: Number of nodes with available pods: 2
Apr 20 17:46:46.735: INFO: Number of running nodes: 2, number of available pods: 2
Apr 20 17:46:46.735: INFO: Update the DaemonSet to trigger a rollout
Apr 20 17:46:46.741: INFO: Updating DaemonSet daemon-set
Apr 20 17:46:59.785: INFO: Roll back the DaemonSet before rollout is complete
Apr 20 17:46:59.792: INFO: Updating DaemonSet daemon-set
Apr 20 17:46:59.792: INFO: Make sure DaemonSet rollback is complete
Apr 20 17:46:59.802: INFO: Wrong image for pod: daemon-set-6pxs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 20 17:46:59.802: INFO: Pod daemon-set-6pxs5 is not available
Apr 20 17:46:59.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:47:01.047: INFO: Wrong image for pod: daemon-set-6pxs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 20 17:47:01.047: INFO: Pod daemon-set-6pxs5 is not available
Apr 20 17:47:01.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:47:01.813: INFO: Wrong image for pod: daemon-set-6pxs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 20 17:47:01.813: INFO: Pod daemon-set-6pxs5 is not available
Apr 20 17:47:01.817: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 20 17:47:02.813: INFO: Pod daemon-set-r2xpc is not available
Apr 20 17:47:02.817: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-362, will wait for the garbage collector to delete the pods
Apr 20 17:47:02.902: INFO: Deleting DaemonSet.extensions daemon-set took: 5.853662ms
Apr 20 17:47:03.202: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231418ms
Apr 20 17:47:09.505: INFO: Number of nodes with available pods: 0
Apr 20 17:47:09.505: INFO: Number of running nodes: 0, number of available pods: 0
Apr 20 17:47:09.507: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-362/daemonsets","resourceVersion":"1311695"},"items":null}

Apr 20 17:47:09.510: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-362/pods","resourceVersion":"1311695"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:47:09.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-362" for this suite.
Apr 20 17:47:17.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:47:17.683: INFO: namespace daemonsets-362 deletion completed in 8.159841316s

• [SLOW TEST:41.081 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
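The DaemonSet rollback test above updates the DaemonSet to a bad image (`foo:non-existent`), rolls back mid-rollout, then loops until every daemon pod runs the original image again; the "Wrong image for pod" and "is not available" lines are that loop reporting why it keeps waiting. A sketch of the per-pod check it performs, over simple tuples instead of Pod objects (names and availability flags taken from the log; the helper itself is illustrative):

```python
def rollback_complete(pods, expected_image):
    """Mirrors 'Make sure DaemonSet rollback is complete': every daemon pod
    must run the expected image and be available."""
    for name, image, available in pods:
        if image != expected_image:
            print(f"Wrong image for pod: {name}. "
                  f"Expected: {expected_image}, got: {image}.")
            return False
        if not available:
            print(f"Pod {name} is not available")
            return False
    return True

EXPECTED = "docker.io/library/nginx:1.14-alpine"
# Mid-rollback, as at 17:46:59: one pod still on the bad image.
mid = [("daemon-set-6pxs5", "foo:non-existent", False)]
# After rollback, as at 17:47:02: replacement pod on the original image.
done = [("daemon-set-r2xpc", EXPECTED, True)]

mid_ok = rollback_complete(mid, EXPECTED)
done_ok = rollback_complete(done, EXPECTED)
print(mid_ok, done_ok)  # False True
```

The repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints" lines are the same loop skipping the control-plane node: the DaemonSet's pods lack a toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so that node is excluded from the expected count.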
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:47:17.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 20 17:47:17.754: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 20 17:47:17.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:26.855: INFO: stderr: ""
Apr 20 17:47:26.855: INFO: stdout: "service/redis-slave created\n"
Apr 20 17:47:26.855: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 20 17:47:26.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:27.156: INFO: stderr: ""
Apr 20 17:47:27.156: INFO: stdout: "service/redis-master created\n"
Apr 20 17:47:27.157: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 20 17:47:27.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:27.497: INFO: stderr: ""
Apr 20 17:47:27.497: INFO: stdout: "service/frontend created\n"
Apr 20 17:47:27.498: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 20 17:47:27.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:27.801: INFO: stderr: ""
Apr 20 17:47:27.801: INFO: stdout: "deployment.apps/frontend created\n"
Apr 20 17:47:27.801: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 20 17:47:27.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:28.163: INFO: stderr: ""
Apr 20 17:47:28.163: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 20 17:47:28.163: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 20 17:47:28.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1836'
Apr 20 17:47:28.857: INFO: stderr: ""
Apr 20 17:47:28.857: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 20 17:47:28.857: INFO: Waiting for all frontend pods to be Running.
Apr 20 17:47:43.908: INFO: Waiting for frontend to serve content.
Apr 20 17:47:43.980: INFO: Trying to add a new entry to the guestbook.
Apr 20 17:47:44.073: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 20 17:47:44.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.227: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.227: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 20 17:47:44.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.381: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 20 17:47:44.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.517: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.517: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 20 17:47:44.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.638: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.638: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 20 17:47:44.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.759: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.759: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 20 17:47:44.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1836'
Apr 20 17:47:44.883: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:47:44.883: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:47:44.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1836" for this suite.
Apr 20 17:48:30.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:48:31.414: INFO: namespace kubectl-1836 deletion completed in 46.490133571s

• [SLOW TEST:73.730 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:48:31.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1271.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1271.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1271.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 233.230.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.230.233_udp@PTR;check="$$(dig +tcp +noall +answer +search 233.230.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.230.233_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1271.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1271.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1271.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1271.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1271.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 233.230.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.230.233_udp@PTR;check="$$(dig +tcp +noall +answer +search 233.230.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.230.233_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 20 17:48:40.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.202: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.205: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.208: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.241: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.247: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.250: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:40.267: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:48:45.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.287: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.290: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.293: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.311: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.314: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:45.336: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:48:50.271: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.279: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.282: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.304: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.310: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:50.334: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:48:55.271: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.277: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.304: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.308: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.311: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:48:55.324: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:49:00.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.278: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.298: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:00.324: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:49:05.271: INFO: Unable to read wheezy_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.277: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.280: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.301: INFO: Unable to read jessie_udp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.303: INFO: Unable to read jessie_tcp@dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.306: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.308: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local from pod dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2: the server could not find the requested resource (get pods dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2)
Apr 20 17:49:05.326: INFO: Lookups using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 failed for: [wheezy_udp@dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@dns-test-service.dns-1271.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_udp@dns-test-service.dns-1271.svc.cluster.local jessie_tcp@dns-test-service.dns-1271.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1271.svc.cluster.local]

Apr 20 17:49:10.331: INFO: DNS probes using dns-1271/dns-test-0dd5c2b3-77fe-472f-b117-370d359a19a2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:49:11.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1271" for this suite.
Apr 20 17:49:17.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:49:17.274: INFO: namespace dns-1271 deletion completed in 6.148021414s

• [SLOW TEST:45.860 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:49:17.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:49:21.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2896" for this suite.
Apr 20 17:50:01.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:50:01.595: INFO: namespace kubelet-test-2896 deletion completed in 40.15078436s

• [SLOW TEST:44.320 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
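The Kubelet test above asserts that a busybox pod's command output is readable through `kubectl logs`. A minimal, hypothetical reproduction of that check might look like this (pod name and message are made up; the poll loop stands in for the framework's wait logic):

```shell
# Hypothetical sketch: run a busybox pod that echoes a message, then
# confirm the message shows up in `kubectl logs`. All names here are
# illustrative assumptions.
verify_pod_logs() {
  local ns=$1 pod=$2 msg=$3 out
  # Create a one-shot pod that prints the message and exits.
  kubectl --namespace="$ns" run "$pod" --image=busybox --restart=Never \
    -- /bin/sh -c "echo $msg" >/dev/null
  # Poll until the kubelet has log output for the pod.
  until out=$(kubectl --namespace="$ns" logs "$pod" 2>/dev/null) \
      && [ -n "$out" ]; do
    sleep 2
  done
  # Assert the expected line reached the container logs.
  echo "$out" | grep -q "$msg"
}
```

Usage would be along the lines of `verify_pod_logs kubelet-test-2896 busybox-logs "Hello from busybox"`, returning nonzero if the message never appears.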
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 20 17:50:01.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 20 17:50:01.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4267'
Apr 20 17:50:01.978: INFO: stderr: ""
Apr 20 17:50:01.978: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 20 17:50:01.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:02.093: INFO: stderr: ""
Apr 20 17:50:02.093: INFO: stdout: "update-demo-nautilus-gx7k7 update-demo-nautilus-rbst7 "
Apr 20 17:50:02.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:02.231: INFO: stderr: ""
Apr 20 17:50:02.231: INFO: stdout: ""
Apr 20 17:50:02.231: INFO: update-demo-nautilus-gx7k7 is created but not running
Apr 20 17:50:07.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:07.892: INFO: stderr: ""
Apr 20 17:50:07.892: INFO: stdout: "update-demo-nautilus-gx7k7 update-demo-nautilus-rbst7 "
Apr 20 17:50:07.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:08.242: INFO: stderr: ""
Apr 20 17:50:08.242: INFO: stdout: "true"
Apr 20 17:50:08.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:08.336: INFO: stderr: ""
Apr 20 17:50:08.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:08.336: INFO: validating pod update-demo-nautilus-gx7k7
Apr 20 17:50:08.476: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:08.476: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:08.476: INFO: update-demo-nautilus-gx7k7 is verified up and running
Apr 20 17:50:08.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbst7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:08.574: INFO: stderr: ""
Apr 20 17:50:08.574: INFO: stdout: ""
Apr 20 17:50:08.574: INFO: update-demo-nautilus-rbst7 is created but not running
Apr 20 17:50:13.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:13.684: INFO: stderr: ""
Apr 20 17:50:13.684: INFO: stdout: "update-demo-nautilus-gx7k7 update-demo-nautilus-rbst7 "
Apr 20 17:50:13.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:13.780: INFO: stderr: ""
Apr 20 17:50:13.780: INFO: stdout: "true"
Apr 20 17:50:13.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:13.877: INFO: stderr: ""
Apr 20 17:50:13.877: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:13.877: INFO: validating pod update-demo-nautilus-gx7k7
Apr 20 17:50:13.881: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:13.881: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:13.881: INFO: update-demo-nautilus-gx7k7 is verified up and running
Apr 20 17:50:13.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbst7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:13.971: INFO: stderr: ""
Apr 20 17:50:13.971: INFO: stdout: "true"
Apr 20 17:50:13.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbst7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:14.061: INFO: stderr: ""
Apr 20 17:50:14.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:14.061: INFO: validating pod update-demo-nautilus-rbst7
Apr 20 17:50:14.081: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:14.081: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:14.081: INFO: update-demo-nautilus-rbst7 is verified up and running
STEP: scaling down the replication controller
Apr 20 17:50:14.084: INFO: scanned /root for discovery docs: 
Apr 20 17:50:14.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4267'
Apr 20 17:50:15.216: INFO: stderr: ""
Apr 20 17:50:15.216: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 20 17:50:15.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:15.321: INFO: stderr: ""
Apr 20 17:50:15.321: INFO: stdout: "update-demo-nautilus-gx7k7 update-demo-nautilus-rbst7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 20 17:50:20.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:20.448: INFO: stderr: ""
Apr 20 17:50:20.448: INFO: stdout: "update-demo-nautilus-gx7k7 "
Apr 20 17:50:20.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:20.534: INFO: stderr: ""
Apr 20 17:50:20.534: INFO: stdout: "true"
Apr 20 17:50:20.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:20.627: INFO: stderr: ""
Apr 20 17:50:20.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:20.627: INFO: validating pod update-demo-nautilus-gx7k7
Apr 20 17:50:20.629: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:20.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:20.630: INFO: update-demo-nautilus-gx7k7 is verified up and running
STEP: scaling up the replication controller
Apr 20 17:50:20.631: INFO: scanned /root for discovery docs: 
Apr 20 17:50:20.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4267'
Apr 20 17:50:21.743: INFO: stderr: ""
Apr 20 17:50:21.743: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 20 17:50:21.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:21.840: INFO: stderr: ""
Apr 20 17:50:21.840: INFO: stdout: "update-demo-nautilus-dgnbw update-demo-nautilus-gx7k7 "
Apr 20 17:50:21.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dgnbw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:21.936: INFO: stderr: ""
Apr 20 17:50:21.936: INFO: stdout: ""
Apr 20 17:50:21.936: INFO: update-demo-nautilus-dgnbw is created but not running
Apr 20 17:50:26.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4267'
Apr 20 17:50:27.033: INFO: stderr: ""
Apr 20 17:50:27.033: INFO: stdout: "update-demo-nautilus-dgnbw update-demo-nautilus-gx7k7 "
Apr 20 17:50:27.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dgnbw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:27.127: INFO: stderr: ""
Apr 20 17:50:27.127: INFO: stdout: "true"
Apr 20 17:50:27.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dgnbw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:27.216: INFO: stderr: ""
Apr 20 17:50:27.216: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:27.216: INFO: validating pod update-demo-nautilus-dgnbw
Apr 20 17:50:27.220: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:27.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:27.220: INFO: update-demo-nautilus-dgnbw is verified up and running
Apr 20 17:50:27.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:27.319: INFO: stderr: ""
Apr 20 17:50:27.319: INFO: stdout: "true"
Apr 20 17:50:27.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx7k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4267'
Apr 20 17:50:27.415: INFO: stderr: ""
Apr 20 17:50:27.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 20 17:50:27.415: INFO: validating pod update-demo-nautilus-gx7k7
Apr 20 17:50:27.419: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 20 17:50:27.419: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 20 17:50:27.419: INFO: update-demo-nautilus-gx7k7 is verified up and running
STEP: using delete to clean up resources
Apr 20 17:50:27.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4267'
Apr 20 17:50:27.522: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 20 17:50:27.522: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 20 17:50:27.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4267'
Apr 20 17:50:27.658: INFO: stderr: "No resources found.\n"
Apr 20 17:50:27.658: INFO: stdout: ""
Apr 20 17:50:27.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4267 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 17:50:27.753: INFO: stderr: ""
Apr 20 17:50:27.753: INFO: stdout: "update-demo-nautilus-dgnbw\nupdate-demo-nautilus-gx7k7\n"
Apr 20 17:50:28.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4267'
Apr 20 17:50:28.467: INFO: stderr: "No resources found.\n"
Apr 20 17:50:28.467: INFO: stdout: ""
Apr 20 17:50:28.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4267 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 20 17:50:28.578: INFO: stderr: ""
Apr 20 17:50:28.578: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 20 17:50:28.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4267" for this suite.
Apr 20 17:50:50.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 20 17:50:50.683: INFO: namespace kubectl-4267 deletion completed in 22.102325195s

• [SLOW TEST:49.089 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
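The scale test above repeats one pattern many times: after each `kubectl scale rc update-demo-nautilus --replicas=N`, it queries every matching pod's container state with a go-template and retries until each one reports `true`. That poll loop can be sketched as a pair of shell helpers (the template and the rc/namespace names are taken verbatim from the log; the helper names and 5-second retry are assumptions):

```shell
# Hypothetical sketch of the poll-until-running loop the e2e framework runs
# above. check_running prints "true" once the "update-demo" container in the
# given pod is in the running state, mirroring the template in the log.
check_running() {
  local pod=$1 ns=$2
  kubectl --kubeconfig=/root/.kube/config --namespace="$ns" \
    get pods "$pod" -o template \
    --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
}

# Retry each pod until it reports running, as the framework does between
# its "created but not running" and "verified up and running" log lines.
wait_for_pods() {
  local ns=$1 pod
  shift
  for pod in "$@"; do
    until [ "$(check_running "$pod" "$ns")" = "true" ]; do
      echo "$pod is created but not running"
      sleep 5
    done
    echo "$pod is verified up and running"
  done
}
```

For example, after scaling back up to two replicas the framework effectively runs `wait_for_pods kubectl-4267 update-demo-nautilus-dgnbw update-demo-nautilus-gx7k7` before validating each pod's served image.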
SSSSSSSSSSSSSSSSSSSSSSS
Apr 20 17:50:50.684: INFO: Running AfterSuite actions on all nodes
Apr 20 17:50:50.684: INFO: Running AfterSuite actions on node 1
Apr 20 17:50:50.684: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 7621.842 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS