I0516 12:55:44.150028 6 e2e.go:243] Starting e2e run "0f830618-8eb8-4d0c-9f82-8cd8bc6973fb" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589633743 - Will randomize all specs
Will run 215 of 4412 specs

May 16 12:55:44.346: INFO: >>> kubeConfig: /root/.kube/config
May 16 12:55:44.350: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 16 12:55:44.373: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 16 12:55:44.407: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 16 12:55:44.407: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 16 12:55:44.407: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 16 12:55:44.416: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 16 12:55:44.416: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 16 12:55:44.416: INFO: e2e test version: v1.15.11
May 16 12:55:44.418: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:55:44.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
May 16 12:55:44.481: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-b15e56af-f1de-4120-8cf8-ce7f5722af26
STEP: Creating a pod to test consume secrets
May 16 12:55:44.492: INFO: Waiting up to 5m0s for pod "pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58" in namespace "secrets-1567" to be "success or failure"
May 16 12:55:44.494: INFO: Pod "pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621741ms
May 16 12:55:46.499: INFO: Pod "pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007113601s
May 16 12:55:48.503: INFO: Pod "pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011306113s
STEP: Saw pod success
May 16 12:55:48.503: INFO: Pod "pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58" satisfied condition "success or failure"
May 16 12:55:48.506: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58 container secret-volume-test: 
STEP: delete the pod
May 16 12:55:48.531: INFO: Waiting for pod pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58 to disappear
May 16 12:55:48.547: INFO: Pod pod-secrets-83dcf805-e6b9-409b-b073-47fbb027ad58 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:55:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1567" for this suite.
May 16 12:55:54.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:55:54.641: INFO: namespace secrets-1567 deletion completed in 6.090197664s

• [SLOW TEST:10.223 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:55:54.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:55:58.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6803" for this suite.
May 16 12:56:38.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:56:38.886: INFO: namespace kubelet-test-6803 deletion completed in 40.097870285s

• [SLOW TEST:44.244 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:56:38.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 12:56:43.041: INFO: Waiting up to 5m0s for pod "client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc" in namespace "pods-8336" to be "success or failure"
May 16 12:56:43.137: INFO: Pod "client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 95.658541ms
May 16 12:56:45.140: INFO: Pod "client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099277252s
May 16 12:56:47.145: INFO: Pod "client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103778807s
STEP: Saw pod success
May 16 12:56:47.145: INFO: Pod "client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc" satisfied condition "success or failure"
May 16 12:56:47.147: INFO: Trying to get logs from node iruya-worker pod client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc container env3cont: 
STEP: delete the pod
May 16 12:56:47.217: INFO: Waiting for pod client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc to disappear
May 16 12:56:47.280: INFO: Pod client-envvars-1263383d-d029-4d58-920c-4d5f203c6fcc no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:56:47.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8336" for this suite.
May 16 12:57:37.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:57:37.468: INFO: namespace pods-8336 deletion completed in 50.183125222s

• [SLOW TEST:58.582 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:57:37.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 16 12:57:37.558: INFO: Waiting up to 5m0s for pod "downward-api-4173d922-f20c-4995-b0d4-02b44d28281d" in namespace "downward-api-5305" to be "success or failure"
May 16 12:57:37.593: INFO: Pod "downward-api-4173d922-f20c-4995-b0d4-02b44d28281d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.0588ms
May 16 12:57:39.597: INFO: Pod "downward-api-4173d922-f20c-4995-b0d4-02b44d28281d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039073719s
May 16 12:57:41.602: INFO: Pod "downward-api-4173d922-f20c-4995-b0d4-02b44d28281d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043587326s
STEP: Saw pod success
May 16 12:57:41.602: INFO: Pod "downward-api-4173d922-f20c-4995-b0d4-02b44d28281d" satisfied condition "success or failure"
May 16 12:57:41.605: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4173d922-f20c-4995-b0d4-02b44d28281d container dapi-container: 
STEP: delete the pod
May 16 12:57:41.769: INFO: Waiting for pod downward-api-4173d922-f20c-4995-b0d4-02b44d28281d to disappear
May 16 12:57:41.880: INFO: Pod downward-api-4173d922-f20c-4995-b0d4-02b44d28281d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:57:41.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5305" for this suite.
May 16 12:57:47.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:57:48.009: INFO: namespace downward-api-5305 deletion completed in 6.111846688s

• [SLOW TEST:10.540 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:57:48.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4413
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4413
STEP: Deleting pre-stop pod
May 16 12:58:01.224: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:58:01.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4413" for this suite.
May 16 12:58:39.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:58:39.326: INFO: namespace prestop-4413 deletion completed in 38.0921523s

• [SLOW TEST:51.316 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:58:39.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 12:58:39.425: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May 16 12:58:44.430: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 16 12:58:44.430: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 16 12:58:44.450: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5882,SelfLink:/apis/apps/v1/namespaces/deployment-5882/deployments/test-cleanup-deployment,UID:431a6883-37ad-40af-80ee-0f0a647c7e95,ResourceVersion:11210443,Generation:1,CreationTimestamp:2020-05-16 12:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
May 16 12:58:44.474: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5882,SelfLink:/apis/apps/v1/namespaces/deployment-5882/replicasets/test-cleanup-deployment-55bbcbc84c,UID:3588fde8-a378-4e26-87fc-45e68fb08c61,ResourceVersion:11210445,Generation:1,CreationTimestamp:2020-05-16 12:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 431a6883-37ad-40af-80ee-0f0a647c7e95 0xc00216f3b7 0xc00216f3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 16 12:58:44.474: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May 16 12:58:44.474: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5882,SelfLink:/apis/apps/v1/namespaces/deployment-5882/replicasets/test-cleanup-controller,UID:ec4aef69-feaa-400d-8c0b-24d21a4e73bb,ResourceVersion:11210444,Generation:1,CreationTimestamp:2020-05-16 12:58:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 431a6883-37ad-40af-80ee-0f0a647c7e95 0xc00216f2e7 0xc00216f2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 16 12:58:44.494: INFO: Pod "test-cleanup-controller-ldj8s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-ldj8s,GenerateName:test-cleanup-controller-,Namespace:deployment-5882,SelfLink:/api/v1/namespaces/deployment-5882/pods/test-cleanup-controller-ldj8s,UID:f2f87270-9fa9-4a2a-bf79-17cd1988b043,ResourceVersion:11210438,Generation:0,CreationTimestamp:2020-05-16 12:58:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ec4aef69-feaa-400d-8c0b-24d21a4e73bb 0xc00216fc87 0xc00216fc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d4qlt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d4qlt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d4qlt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00216fd00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00216fd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 12:58:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 12:58:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 12:58:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 12:58:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.194,StartTime:2020-05-16 12:58:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 12:58:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f2159feeb441ed975a48e1fa3dcb473dac23df693e38681a99b286e13fa978e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 16 12:58:44.495: INFO: Pod "test-cleanup-deployment-55bbcbc84c-cnngw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-cnngw,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5882,SelfLink:/api/v1/namespaces/deployment-5882/pods/test-cleanup-deployment-55bbcbc84c-cnngw,UID:c8c707fd-a97e-47a4-b2ac-3430cc438220,ResourceVersion:11210446,Generation:0,CreationTimestamp:2020-05-16 12:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 3588fde8-a378-4e26-87fc-45e68fb08c61 0xc00216fe17 0xc00216fe18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d4qlt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d4qlt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d4qlt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00216fe80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00216fea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:58:44.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5882" for this suite.
May 16 12:58:50.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 12:58:50.681: INFO: namespace deployment-5882 deletion completed in 6.128550867s • [SLOW TEST:11.355 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 12:58:50.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 12:58:54.771: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 
16 12:58:54.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7589" for this suite. May 16 12:59:00.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 12:59:00.916: INFO: namespace container-runtime-7589 deletion completed in 6.101850162s • [SLOW TEST:10.235 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 12:59:00.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 16 12:59:05.058: INFO: Expected: &{} to match Container's Termination Message:
--
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:59:05.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8866" for this suite.
May 16 12:59:11.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:59:11.235: INFO: namespace container-runtime-8866 deletion completed in 6.084186403s
• [SLOW TEST:10.317 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:59:11.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:59:11.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1500" for this suite.
May 16 12:59:17.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:59:17.442: INFO: namespace kubelet-test-1500 deletion completed in 6.079436497s
• [SLOW TEST:6.207 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:59:17.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f95c05d3-ba64-45d9-8a36-9bd59069e2ae
STEP: Creating a pod to test consume secrets
May 16 12:59:17.520: INFO: Waiting up to 5m0s for pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c" in namespace "secrets-663" to be "success or failure"
May 16 12:59:17.557: INFO: Pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.718576ms
May 16 12:59:19.561: INFO: Pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040947109s
May 16 12:59:21.566: INFO: Pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c": Phase="Running", Reason="", readiness=true. Elapsed: 4.045547413s
May 16 12:59:23.570: INFO: Pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049933702s
STEP: Saw pod success
May 16 12:59:23.570: INFO: Pod "pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c" satisfied condition "success or failure"
May 16 12:59:23.574: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c container secret-volume-test:
STEP: delete the pod
May 16 12:59:23.595: INFO: Waiting for pod pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c to disappear
May 16 12:59:23.606: INFO: Pod pod-secrets-1061fc5c-3cac-4687-9ee8-f2ec74eb676c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:59:23.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-663" for this suite.
May 16 12:59:29.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:59:29.703: INFO: namespace secrets-663 deletion completed in 6.094196897s
• [SLOW TEST:12.261 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:59:29.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:59:29.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2984" for this suite.
May 16 12:59:35.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 12:59:35.865: INFO: namespace services-2984 deletion completed in 6.088011023s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.161 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 12:59:35.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 12:59:35.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 12:59:40.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8568" for this suite.
May 16 13:00:20.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:00:20.186: INFO: namespace pods-8568 deletion completed in 40.086412473s
• [SLOW TEST:44.320 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:00:20.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-17cd1a3d-d7a3-466c-bf3c-c270bc836d55
STEP: Creating a pod to test consume secrets
May 16 13:00:20.280: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354" in namespace "projected-6200" to be "success or failure"
May 16 13:00:20.297: INFO: Pod "pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354": Phase="Pending", Reason="", readiness=false. Elapsed: 16.864914ms
May 16 13:00:22.326: INFO: Pod "pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045988656s
May 16 13:00:24.330: INFO: Pod "pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050526511s
STEP: Saw pod success
May 16 13:00:24.330: INFO: Pod "pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354" satisfied condition "success or failure"
May 16 13:00:24.334: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354 container projected-secret-volume-test:
STEP: delete the pod
May 16 13:00:24.368: INFO: Waiting for pod pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354 to disappear
May 16 13:00:24.379: INFO: Pod pod-projected-secrets-9da2fdf4-d1d9-46a1-8d08-f762acc94354 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:00:24.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6200" for this suite.
May 16 13:00:30.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:00:30.475: INFO: namespace projected-6200 deletion completed in 6.093087223s
• [SLOW TEST:10.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:00:30.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 16 13:00:30.526: INFO: Waiting up to 5m0s for pod "pod-c66757be-1748-44c6-9192-c4749a7c68b8" in namespace "emptydir-5642" to be "success or failure"
May 16 13:00:30.529: INFO: Pod "pod-c66757be-1748-44c6-9192-c4749a7c68b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.6864ms
May 16 13:00:32.559: INFO: Pod "pod-c66757be-1748-44c6-9192-c4749a7c68b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033042805s
May 16 13:00:34.563: INFO: Pod "pod-c66757be-1748-44c6-9192-c4749a7c68b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036678659s
STEP: Saw pod success
May 16 13:00:34.563: INFO: Pod "pod-c66757be-1748-44c6-9192-c4749a7c68b8" satisfied condition "success or failure"
May 16 13:00:34.566: INFO: Trying to get logs from node iruya-worker pod pod-c66757be-1748-44c6-9192-c4749a7c68b8 container test-container:
STEP: delete the pod
May 16 13:00:34.639: INFO: Waiting for pod pod-c66757be-1748-44c6-9192-c4749a7c68b8 to disappear
May 16 13:00:34.654: INFO: Pod pod-c66757be-1748-44c6-9192-c4749a7c68b8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:00:34.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5642" for this suite.
May 16 13:00:40.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:00:40.739: INFO: namespace emptydir-5642 deletion completed in 6.081820062s
• [SLOW TEST:10.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:00:40.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 16 13:00:40.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14410663-7710-4a34-8886-046db515e06c" in namespace "projected-1834" to be "success or failure"
May 16 13:00:40.828: INFO: Pod "downwardapi-volume-14410663-7710-4a34-8886-046db515e06c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.777852ms
May 16 13:00:42.834: INFO: Pod "downwardapi-volume-14410663-7710-4a34-8886-046db515e06c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011379829s
May 16 13:00:44.856: INFO: Pod "downwardapi-volume-14410663-7710-4a34-8886-046db515e06c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033452519s
STEP: Saw pod success
May 16 13:00:44.856: INFO: Pod "downwardapi-volume-14410663-7710-4a34-8886-046db515e06c" satisfied condition "success or failure"
May 16 13:00:44.858: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-14410663-7710-4a34-8886-046db515e06c container client-container:
STEP: delete the pod
May 16 13:00:44.877: INFO: Waiting for pod downwardapi-volume-14410663-7710-4a34-8886-046db515e06c to disappear
May 16 13:00:44.882: INFO: Pod downwardapi-volume-14410663-7710-4a34-8886-046db515e06c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:00:44.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1834" for this suite.
May 16 13:00:50.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:00:50.975: INFO: namespace projected-1834 deletion completed in 6.090876617s
• [SLOW TEST:10.236 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:00:50.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 16 13:00:51.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2130'
May 16 13:00:53.725: INFO: stderr: ""
May 16 13:00:53.725: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 16 13:00:54.729: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:54.729: INFO: Found 0 / 1
May 16 13:00:55.730: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:55.730: INFO: Found 0 / 1
May 16 13:00:56.730: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:56.730: INFO: Found 0 / 1
May 16 13:00:57.730: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:57.730: INFO: Found 1 / 1
May 16 13:00:57.730: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 16 13:00:57.733: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:57.733: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 16 13:00:57.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rd7t6 --namespace=kubectl-2130 -p {"metadata":{"annotations":{"x":"y"}}}'
May 16 13:00:57.830: INFO: stderr: ""
May 16 13:00:57.830: INFO: stdout: "pod/redis-master-rd7t6 patched\n"
STEP: checking annotations
May 16 13:00:57.835: INFO: Selector matched 1 pods for map[app:redis]
May 16 13:00:57.835: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:00:57.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2130" for this suite.
May 16 13:01:19.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:01:19.950: INFO: namespace kubectl-2130 deletion completed in 22.110002289s
• [SLOW TEST:28.974 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:01:19.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 16 13:01:20.018: INFO: Waiting up to 5m0s for pod "downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec" in namespace "downward-api-9664" to be "success or failure"
May 16 13:01:20.044: INFO: Pod "downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec": Phase="Pending", Reason="", readiness=false. Elapsed: 25.42793ms
May 16 13:01:22.048: INFO: Pod "downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029320188s
May 16 13:01:24.052: INFO: Pod "downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033139959s
STEP: Saw pod success
May 16 13:01:24.052: INFO: Pod "downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec" satisfied condition "success or failure"
May 16 13:01:24.054: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec container dapi-container:
STEP: delete the pod
May 16 13:01:24.221: INFO: Waiting for pod downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec to disappear
May 16 13:01:24.320: INFO: Pod downward-api-d921dcf8-f157-4be1-98e2-e5acea1183ec no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:01:24.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9664" for this suite.
May 16 13:01:30.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:01:30.422: INFO: namespace downward-api-9664 deletion completed in 6.097617605s
• [SLOW TEST:10.472 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:01:30.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9d82071c-87c8-4673-a3b1-20edb644debb
STEP: Creating a pod to test consume configMaps
May 16 13:01:30.837: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87" in namespace "configmap-6496" to be "success or failure"
May 16 13:01:30.896: INFO: Pod "pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87": Phase="Pending", Reason="", readiness=false. Elapsed: 58.870511ms
May 16 13:01:32.968: INFO: Pod "pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131156724s
May 16 13:01:34.972: INFO: Pod "pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135092322s
STEP: Saw pod success
May 16 13:01:34.972: INFO: Pod "pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87" satisfied condition "success or failure"
May 16 13:01:34.975: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87 container configmap-volume-test:
STEP: delete the pod
May 16 13:01:35.035: INFO: Waiting for pod pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87 to disappear
May 16 13:01:35.063: INFO: Pod pod-configmaps-0b5b3ee6-fc9e-4f1b-ba8e-045709be0d87 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:01:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6496" for this suite.
May 16 13:01:41.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:01:41.175: INFO: namespace configmap-6496 deletion completed in 6.107763563s
• [SLOW TEST:10.752 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:01:41.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 16 13:01:47.826: INFO: Successfully updated pod "annotationupdate507f0a64-ecb5-40f1-b37d-12580ca3738e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:01:49.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2600" for this suite.
May 16 13:02:11.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:02:11.945: INFO: namespace downward-api-2600 deletion completed in 22.091859716s • [SLOW TEST:30.770 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:02:11.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:02:12.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc" in namespace "projected-9948" to be "success or failure" May 16 13:02:12.027: INFO: Pod "downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.7539ms May 16 13:02:14.031: INFO: Pod "downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006656468s May 16 13:02:16.036: INFO: Pod "downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011208762s STEP: Saw pod success May 16 13:02:16.036: INFO: Pod "downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc" satisfied condition "success or failure" May 16 13:02:16.039: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc container client-container: STEP: delete the pod May 16 13:02:16.095: INFO: Waiting for pod downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc to disappear May 16 13:02:16.105: INFO: Pod downwardapi-volume-d3698c03-dcb3-42a4-9d74-f2998536ecfc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:02:16.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9948" for this suite. 
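The "Waiting up to 5m0s for pod ... to be \"success or failure\"" lines above are a fixed-interval poll of `pod.status.phase`. A minimal sketch of that loop — `get_phase` is a hypothetical stand-in for an API read, not the framework's actual helper:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns
    'Succeeded' or 'Failed', or raise TimeoutError after `timeout`.
    Mirrors the ~2s cadence visible in the Elapsed: lines above."""
    deadline = now() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

In the log, two `Pending` samples roughly 2s apart followed by `Succeeded` at ~4s is exactly this loop terminating on its third probe.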
May 16 13:02:22.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:02:22.188: INFO: namespace projected-9948 deletion completed in 6.079040506s • [SLOW TEST:10.243 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:02:22.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:02:22.262: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.493201ms) May 16 13:02:22.265: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.964025ms) May 16 13:02:22.268: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.827934ms) May 16 13:02:22.271: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.899243ms) May 16 13:02:22.274: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.599747ms) May 16 13:02:22.276: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.41002ms) May 16 13:02:22.279: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.569472ms) May 16 13:02:22.281: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.426906ms) May 16 13:02:22.283: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.34663ms) May 16 13:02:22.286: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.530757ms) May 16 13:02:22.288: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.424853ms) May 16 13:02:22.292: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.006149ms) May 16 13:02:22.295: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.102835ms) May 16 13:02:22.304: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 8.969196ms) May 16 13:02:22.307: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.052684ms) May 16 13:02:22.309: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.691804ms) May 16 13:02:22.312: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.812482ms) May 16 13:02:22.316: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.207416ms) May 16 13:02:22.318: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.800175ms) May 16 13:02:22.321: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.634893ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:02:22.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5651" for this suite. May 16 13:02:28.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:02:28.447: INFO: namespace proxy-5651 deletion completed in 6.122559376s • [SLOW TEST:6.259 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:02:28.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
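The twenty proxy requests logged earlier all hit the same apiserver path: the node's kubelet log directory reached through the node proxy subresource, with the kubelet port (10250) given explicitly. A small helper showing how that path is assembled (illustrative only; not code from the test):

```python
def node_proxy_logs_path(node, kubelet_port=10250):
    """Build the apiserver path for a node's kubelet log listing via
    the proxy subresource, e.g. the URL polled 20 times above."""
    return f"/api/v1/nodes/{node}:{kubelet_port}/proxy/logs/"
```

Each 200 response body is the kubelet's directory listing (`containers/`, `pods/`), which is what the repeated lines in the log are.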
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 13:02:36.593: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:36.788: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:38.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:38.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:40.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:40.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:42.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:42.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:44.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:44.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:46.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:46.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:48.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:48.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:50.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:50.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:52.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:52.794: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:54.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:54.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:56.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:02:56.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:02:58.788: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 16 13:02:58.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:03:00.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:03:00.792: INFO: Pod pod-with-prestop-exec-hook still exists May 16 13:03:02.788: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 16 13:03:02.792: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:03:02.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6780" for this suite. May 16 13:03:24.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:03:24.900: INFO: namespace container-lifecycle-hook-6780 deletion completed in 22.096948999s • [SLOW TEST:56.453 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:03:24.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:03:48.994: INFO: Container started at 2020-05-16 13:03:28 +0000 UTC, pod became ready at 2020-05-16 13:03:48 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:03:48.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9960" for this suite. May 16 13:04:11.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:04:11.086: INFO: namespace container-probe-9960 deletion completed in 22.088763791s • [SLOW TEST:46.186 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:04:11.086: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-593d7f4b-4d69-4f29-9f62-ef33cb631aec in namespace container-probe-5431 May 16 13:04:15.183: INFO: Started pod busybox-593d7f4b-4d69-4f29-9f62-ef33cb631aec in namespace container-probe-5431 STEP: checking the pod's current state and verifying that restartCount is present May 16 13:04:15.187: INFO: Initial restart count of pod busybox-593d7f4b-4d69-4f29-9f62-ef33cb631aec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:08:15.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5431" for this suite. 
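The exec-probe spec above watches `restartCount` stay at 0 for four minutes: because `cat /tmp/health` keeps succeeding, the kubelet never accumulates enough consecutive failures to kill the container. A simplified model of that decision (not the real kubelet probe worker, just its consecutive-failure rule):

```python
def should_restart(probe_results, failure_threshold=3):
    """Return True if a liveness check would trigger a restart:
    `failure_threshold` consecutive failures. Any success resets
    the streak, which is why the pod above is never restarted."""
    streak = 0
    for ok in probe_results:
        streak = 0 if ok else streak + 1
        if streak >= failure_threshold:
            return True
    return False
```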
May 16 13:08:22.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:08:22.078: INFO: namespace container-probe-5431 deletion completed in 6.131460745s • [SLOW TEST:250.992 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:08:22.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:08:22.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc" in namespace "projected-8088" to be "success or failure" May 16 13:08:22.144: INFO: Pod "downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.986026ms May 16 13:08:24.148: INFO: Pod "downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035389399s May 16 13:08:26.165: INFO: Pod "downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05201956s STEP: Saw pod success May 16 13:08:26.165: INFO: Pod "downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc" satisfied condition "success or failure" May 16 13:08:26.168: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc container client-container: STEP: delete the pod May 16 13:08:26.183: INFO: Waiting for pod downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc to disappear May 16 13:08:26.187: INFO: Pod downwardapi-volume-aab495cc-5e52-4689-a663-484debb313bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:08:26.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8088" for this suite. 
May 16 13:08:32.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:08:32.283: INFO: namespace projected-8088 deletion completed in 6.092583539s • [SLOW TEST:10.204 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:08:32.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1548, will wait for the garbage collector to delete the pods May 16 13:08:38.423: INFO: Deleting Job.batch foo took: 7.367445ms May 16 13:08:38.724: INFO: Terminating Job.batch foo pods took: 300.267018ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:09:22.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1548" for this suite. 
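The Job deletion above notes it "will wait for the garbage collector to delete the pods" — that behavior corresponds to a delete request whose `propagationPolicy` hands dependents to the GC. A minimal sketch of such a request body (assumed shape; the real test goes through the Go client, and `apiVersion: v1` here is the conventional serialization, not something shown in this log):

```python
def delete_options(propagation="Foreground"):
    """Minimal DeleteOptions body selecting how dependents (the Job's
    pods) are cleaned up: 'Foreground' blocks until they are gone,
    'Background' lets the GC reap them asynchronously."""
    assert propagation in ("Orphan", "Background", "Foreground")
    return {"kind": "DeleteOptions", "apiVersion": "v1",
            "propagationPolicy": propagation}
```

The long gap between "Terminating Job.batch foo pods" and "Ensuring job was deleted" is the test waiting out that garbage collection.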
May 16 13:09:28.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:09:28.321: INFO: namespace job-1548 deletion completed in 6.086302923s • [SLOW TEST:56.038 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:09:28.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 16 13:09:28.436: INFO: Waiting up to 5m0s for pod "client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a" in namespace "containers-2523" to be "success or failure" May 16 13:09:28.480: INFO: Pod "client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.689205ms May 16 13:09:30.484: INFO: Pod "client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047768427s May 16 13:09:32.488: INFO: Pod "client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05169415s STEP: Saw pod success May 16 13:09:32.488: INFO: Pod "client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a" satisfied condition "success or failure" May 16 13:09:32.491: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a container test-container: STEP: delete the pod May 16 13:09:32.514: INFO: Waiting for pod client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a to disappear May 16 13:09:32.546: INFO: Pod client-containers-5d0bc0d6-195a-4211-9655-2921f5ddc01a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:09:32.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2523" for this suite. May 16 13:09:38.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:09:38.643: INFO: namespace containers-2523 deletion completed in 6.094067497s • [SLOW TEST:10.322 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 
13:09:38.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-33a44eec-7e2e-46ce-823c-6b5d97b539a9 in namespace container-probe-1589 May 16 13:09:42.748: INFO: Started pod test-webserver-33a44eec-7e2e-46ce-823c-6b5d97b539a9 in namespace container-probe-1589 STEP: checking the pod's current state and verifying that restartCount is present May 16 13:09:42.751: INFO: Initial restart count of pod test-webserver-33a44eec-7e2e-46ce-823c-6b5d97b539a9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:13:43.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1589" for this suite. 
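The HTTP liveness spec above succeeds because the test webserver keeps answering `/healthz` with a 2xx status; a probe treats any code from 200 up to (but not including) 400 as healthy. A toy stand-in for that endpoint's decision logic (not the actual test image):

```python
def healthz_response(path, healthy=True):
    """Status code a minimal /healthz endpoint would return.
    A kubelet httpGet probe counts 200-399 as success, so a steady
    200 here means restartCount stays 0, as verified above."""
    if path != "/healthz":
        return 404
    return 200 if healthy else 500
```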
May 16 13:13:49.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:13:49.436: INFO: namespace container-probe-1589 deletion completed in 6.087107604s • [SLOW TEST:250.792 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:13:49.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 16 13:13:49.507: INFO: Waiting up to 5m0s for pod "client-containers-c18d5908-d954-4e93-a888-510463a54704" in namespace "containers-2602" to be "success or failure" May 16 13:13:49.509: INFO: Pod "client-containers-c18d5908-d954-4e93-a888-510463a54704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441265ms May 16 13:13:51.513: INFO: Pod "client-containers-c18d5908-d954-4e93-a888-510463a54704": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006044484s May 16 13:13:53.517: INFO: Pod "client-containers-c18d5908-d954-4e93-a888-510463a54704": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010200753s STEP: Saw pod success May 16 13:13:53.517: INFO: Pod "client-containers-c18d5908-d954-4e93-a888-510463a54704" satisfied condition "success or failure" May 16 13:13:53.520: INFO: Trying to get logs from node iruya-worker pod client-containers-c18d5908-d954-4e93-a888-510463a54704 container test-container: STEP: delete the pod May 16 13:13:53.540: INFO: Waiting for pod client-containers-c18d5908-d954-4e93-a888-510463a54704 to disappear May 16 13:13:53.546: INFO: Pod client-containers-c18d5908-d954-4e93-a888-510463a54704 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:13:53.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2602" for this suite. May 16 13:13:59.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:13:59.623: INFO: namespace containers-2602 deletion completed in 6.074195787s • [SLOW TEST:10.186 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:13:59.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:13:59.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a" in namespace "downward-api-3008" to be "success or failure" May 16 13:13:59.743: INFO: Pod "downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.530894ms May 16 13:14:01.747: INFO: Pod "downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010882595s May 16 13:14:03.752: INFO: Pod "downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015493989s STEP: Saw pod success May 16 13:14:03.752: INFO: Pod "downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a" satisfied condition "success or failure" May 16 13:14:03.755: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a container client-container: STEP: delete the pod May 16 13:14:03.781: INFO: Waiting for pod downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a to disappear May 16 13:14:03.785: INFO: Pod downwardapi-volume-3923e3ba-d2e7-41ca-a3f2-c05fd1b3d62a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:14:03.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3008" for this suite. May 16 13:14:09.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:14:09.884: INFO: namespace downward-api-3008 deletion completed in 6.09550664s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:14:09.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-f7026d53-6525-4d72-b845-158fa44f2eb6 STEP: Creating a pod to test consume configMaps May 16 13:14:09.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35" in namespace "projected-1702" to be "success or failure" May 16 13:14:09.988: INFO: Pod "pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35": Phase="Pending", Reason="", readiness=false. Elapsed: 3.060637ms May 16 13:14:11.993: INFO: Pod "pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007750083s May 16 13:14:13.998: INFO: Pod "pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012336759s STEP: Saw pod success May 16 13:14:13.998: INFO: Pod "pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35" satisfied condition "success or failure" May 16 13:14:14.001: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35 container projected-configmap-volume-test: STEP: delete the pod May 16 13:14:14.122: INFO: Waiting for pod pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35 to disappear May 16 13:14:14.138: INFO: Pod pod-projected-configmaps-925bf44a-2654-4547-ad4e-36e371323b35 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:14:14.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1702" for this suite. 
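The projected configMap "with mappings" spec above consumes a volume whose `items` list remaps each configMap key to a chosen file path. A hypothetical helper illustrating that key-to-path mapping (names here are illustrative, not from the test's manifest):

```python
def project_items(data, items):
    """Map configMap data keys onto file paths the way a volume's
    `items` list does: each item selects a key and the relative path
    the value is written under inside the mount."""
    return {item["path"]: data[item["key"]] for item in items}
```

The test container then reads the mapped file and the framework checks its contents, which is the "Trying to get logs ... container projected-configmap-volume-test" step above.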
May 16 13:14:20.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:14:20.271: INFO: namespace projected-1702 deletion completed in 6.129844314s • [SLOW TEST:10.387 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:14:20.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-d793efa7-efa4-43c5-8c57-476fe70c3c2e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:14:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3523" for this suite. 
May 16 13:14:26.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:14:26.477: INFO: namespace secrets-3523 deletion completed in 6.1460945s • [SLOW TEST:6.206 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:14:26.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 16 13:14:30.821: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:14:30.924: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "container-runtime-1245" for this suite. May 16 13:14:36.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:14:37.022: INFO: namespace container-runtime-1245 deletion completed in 6.092312046s • [SLOW TEST:10.545 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:14:37.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6483 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 13:14:37.123: INFO: Waiting up to 10m0s 
for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 13:15:03.228: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.207:8080/dial?request=hostName&protocol=http&host=10.244.1.128&port=8080&tries=1'] Namespace:pod-network-test-6483 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:15:03.228: INFO: >>> kubeConfig: /root/.kube/config I0516 13:15:03.255617 6 log.go:172] (0xc002ee3c30) (0xc00180aaa0) Create stream I0516 13:15:03.255661 6 log.go:172] (0xc002ee3c30) (0xc00180aaa0) Stream added, broadcasting: 1 I0516 13:15:03.258401 6 log.go:172] (0xc002ee3c30) Reply frame received for 1 I0516 13:15:03.258439 6 log.go:172] (0xc002ee3c30) (0xc002dd2820) Create stream I0516 13:15:03.258454 6 log.go:172] (0xc002ee3c30) (0xc002dd2820) Stream added, broadcasting: 3 I0516 13:15:03.259416 6 log.go:172] (0xc002ee3c30) Reply frame received for 3 I0516 13:15:03.259467 6 log.go:172] (0xc002ee3c30) (0xc00180ab40) Create stream I0516 13:15:03.259487 6 log.go:172] (0xc002ee3c30) (0xc00180ab40) Stream added, broadcasting: 5 I0516 13:15:03.260339 6 log.go:172] (0xc002ee3c30) Reply frame received for 5 I0516 13:15:03.380353 6 log.go:172] (0xc002ee3c30) Data frame received for 3 I0516 13:15:03.380378 6 log.go:172] (0xc002dd2820) (3) Data frame handling I0516 13:15:03.380392 6 log.go:172] (0xc002dd2820) (3) Data frame sent I0516 13:15:03.380666 6 log.go:172] (0xc002ee3c30) Data frame received for 5 I0516 13:15:03.380684 6 log.go:172] (0xc00180ab40) (5) Data frame handling I0516 13:15:03.380797 6 log.go:172] (0xc002ee3c30) Data frame received for 3 I0516 13:15:03.380811 6 log.go:172] (0xc002dd2820) (3) Data frame handling I0516 13:15:03.383020 6 log.go:172] (0xc002ee3c30) Data frame received for 1 I0516 13:15:03.383039 6 log.go:172] (0xc00180aaa0) (1) Data frame handling I0516 13:15:03.383050 6 log.go:172] (0xc00180aaa0) (1) Data frame sent I0516 13:15:03.383061 6 
log.go:172] (0xc002ee3c30) (0xc00180aaa0) Stream removed, broadcasting: 1 I0516 13:15:03.383244 6 log.go:172] (0xc002ee3c30) Go away received I0516 13:15:03.383320 6 log.go:172] (0xc002ee3c30) (0xc00180aaa0) Stream removed, broadcasting: 1 I0516 13:15:03.383338 6 log.go:172] (0xc002ee3c30) (0xc002dd2820) Stream removed, broadcasting: 3 I0516 13:15:03.383359 6 log.go:172] (0xc002ee3c30) (0xc00180ab40) Stream removed, broadcasting: 5 May 16 13:15:03.383: INFO: Waiting for endpoints: map[] May 16 13:15:03.387: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.207:8080/dial?request=hostName&protocol=http&host=10.244.2.206&port=8080&tries=1'] Namespace:pod-network-test-6483 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:15:03.387: INFO: >>> kubeConfig: /root/.kube/config I0516 13:15:03.421765 6 log.go:172] (0xc002d4ee70) (0xc00180b360) Create stream I0516 13:15:03.421802 6 log.go:172] (0xc002d4ee70) (0xc00180b360) Stream added, broadcasting: 1 I0516 13:15:03.425372 6 log.go:172] (0xc002d4ee70) Reply frame received for 1 I0516 13:15:03.425440 6 log.go:172] (0xc002d4ee70) (0xc002dd28c0) Create stream I0516 13:15:03.425455 6 log.go:172] (0xc002d4ee70) (0xc002dd28c0) Stream added, broadcasting: 3 I0516 13:15:03.426821 6 log.go:172] (0xc002d4ee70) Reply frame received for 3 I0516 13:15:03.426891 6 log.go:172] (0xc002d4ee70) (0xc001c4aa00) Create stream I0516 13:15:03.426924 6 log.go:172] (0xc002d4ee70) (0xc001c4aa00) Stream added, broadcasting: 5 I0516 13:15:03.428026 6 log.go:172] (0xc002d4ee70) Reply frame received for 5 I0516 13:15:03.492083 6 log.go:172] (0xc002d4ee70) Data frame received for 3 I0516 13:15:03.492190 6 log.go:172] (0xc002dd28c0) (3) Data frame handling I0516 13:15:03.492262 6 log.go:172] (0xc002dd28c0) (3) Data frame sent I0516 13:15:03.492565 6 log.go:172] (0xc002d4ee70) Data frame received for 5 I0516 13:15:03.492592 6 log.go:172] 
(0xc001c4aa00) (5) Data frame handling I0516 13:15:03.492866 6 log.go:172] (0xc002d4ee70) Data frame received for 3 I0516 13:15:03.492902 6 log.go:172] (0xc002dd28c0) (3) Data frame handling I0516 13:15:03.495037 6 log.go:172] (0xc002d4ee70) Data frame received for 1 I0516 13:15:03.495060 6 log.go:172] (0xc00180b360) (1) Data frame handling I0516 13:15:03.495085 6 log.go:172] (0xc00180b360) (1) Data frame sent I0516 13:15:03.495112 6 log.go:172] (0xc002d4ee70) (0xc00180b360) Stream removed, broadcasting: 1 I0516 13:15:03.495135 6 log.go:172] (0xc002d4ee70) Go away received I0516 13:15:03.495309 6 log.go:172] (0xc002d4ee70) (0xc00180b360) Stream removed, broadcasting: 1 I0516 13:15:03.495333 6 log.go:172] (0xc002d4ee70) (0xc002dd28c0) Stream removed, broadcasting: 3 I0516 13:15:03.495344 6 log.go:172] (0xc002d4ee70) (0xc001c4aa00) Stream removed, broadcasting: 5 May 16 13:15:03.495: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:15:03.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6483" for this suite. 
May 16 13:15:27.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:15:27.604: INFO: namespace pod-network-test-6483 deletion completed in 24.104880343s • [SLOW TEST:50.582 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:15:27.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 16 13:15:27.685: INFO: Pod name pod-release: Found 0 pods out of 1 May 16 13:15:32.690: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:15:33.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "replication-controller-6100" for this suite. May 16 13:15:39.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:15:39.816: INFO: namespace replication-controller-6100 deletion completed in 6.106543137s • [SLOW TEST:12.212 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:15:39.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:15:40.098: INFO: Create a RollingUpdate DaemonSet May 16 13:15:40.102: INFO: Check that daemon pods launch on every node of the cluster May 16 13:15:40.123: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:40.219: INFO: Number of nodes with available pods: 0 May 16 13:15:40.219: INFO: Node iruya-worker is running more than one daemon pod May 
16 13:15:41.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:41.226: INFO: Number of nodes with available pods: 0 May 16 13:15:41.226: INFO: Node iruya-worker is running more than one daemon pod May 16 13:15:42.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:42.226: INFO: Number of nodes with available pods: 0 May 16 13:15:42.226: INFO: Node iruya-worker is running more than one daemon pod May 16 13:15:43.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:43.226: INFO: Number of nodes with available pods: 0 May 16 13:15:43.226: INFO: Node iruya-worker is running more than one daemon pod May 16 13:15:44.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:44.228: INFO: Number of nodes with available pods: 1 May 16 13:15:44.228: INFO: Node iruya-worker is running more than one daemon pod May 16 13:15:45.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:45.226: INFO: Number of nodes with available pods: 2 May 16 13:15:45.226: INFO: Number of running nodes: 2, number of available pods: 2 May 16 13:15:45.226: INFO: Update the DaemonSet to trigger a rollout May 16 13:15:45.232: INFO: Updating DaemonSet daemon-set May 16 13:15:52.256: INFO: Roll back the DaemonSet before rollout is complete May 16 13:15:52.263: INFO: Updating DaemonSet daemon-set May 16 13:15:52.263: INFO: Make 
sure DaemonSet rollback is complete May 16 13:15:52.267: INFO: Wrong image for pod: daemon-set-mpdxq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 16 13:15:52.267: INFO: Pod daemon-set-mpdxq is not available May 16 13:15:52.288: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:53.292: INFO: Wrong image for pod: daemon-set-mpdxq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 16 13:15:53.292: INFO: Pod daemon-set-mpdxq is not available May 16 13:15:53.296: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:54.453: INFO: Wrong image for pod: daemon-set-mpdxq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 16 13:15:54.453: INFO: Pod daemon-set-mpdxq is not available May 16 13:15:54.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:55.293: INFO: Wrong image for pod: daemon-set-mpdxq. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 16 13:15:55.293: INFO: Pod daemon-set-mpdxq is not available May 16 13:15:55.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:15:56.293: INFO: Pod daemon-set-cn8l5 is not available May 16 13:15:56.298: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7615, will wait for the garbage collector to delete the pods May 16 13:15:56.363: INFO: Deleting DaemonSet.extensions daemon-set took: 6.830784ms May 16 13:15:56.663: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.24302ms May 16 13:16:02.267: INFO: Number of nodes with available pods: 0 May 16 13:16:02.267: INFO: Number of running nodes: 0, number of available pods: 0 May 16 13:16:02.272: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7615/daemonsets","resourceVersion":"11213341"},"items":null} May 16 13:16:02.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7615/pods","resourceVersion":"11213341"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:16:02.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7615" for this suite. 
May 16 13:16:08.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:16:08.379: INFO: namespace daemonsets-7615 deletion completed in 6.090398394s • [SLOW TEST:28.562 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:16:08.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-4bdb47ff-d5b9-4e7e-b1c6-8c019479367b STEP: Creating a pod to test consume configMaps May 16 13:16:08.478: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a" in namespace "configmap-8327" to be "success or failure" May 16 13:16:08.482: INFO: Pod "pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.049882ms May 16 13:16:10.487: INFO: Pod "pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00883458s May 16 13:16:12.490: INFO: Pod "pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012565342s STEP: Saw pod success May 16 13:16:12.490: INFO: Pod "pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a" satisfied condition "success or failure" May 16 13:16:12.493: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a container configmap-volume-test: STEP: delete the pod May 16 13:16:12.513: INFO: Waiting for pod pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a to disappear May 16 13:16:12.518: INFO: Pod pod-configmaps-b8eaed0d-9c31-45b0-8624-9d30f7fe8a5a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:16:12.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8327" for this suite. 
May 16 13:16:18.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:16:18.616: INFO: namespace configmap-8327 deletion completed in 6.094230644s • [SLOW TEST:10.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:16:18.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 16 13:16:24.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-c2d80f61-73f5-4769-9cfa-06d5796ac6bb -c busybox-main-container --namespace=emptydir-8373 -- cat /usr/share/volumeshare/shareddata.txt' May 16 13:16:27.819: INFO: stderr: "I0516 13:16:27.754077 83 log.go:172] (0xc0008b0370) (0xc0002f8a00) Create stream\nI0516 13:16:27.754123 83 log.go:172] (0xc0008b0370) (0xc0002f8a00) Stream added, broadcasting: 1\nI0516 13:16:27.756457 
83 log.go:172] (0xc0008b0370) Reply frame received for 1\nI0516 13:16:27.756493 83 log.go:172] (0xc0008b0370) (0xc0002f8aa0) Create stream\nI0516 13:16:27.756508 83 log.go:172] (0xc0008b0370) (0xc0002f8aa0) Stream added, broadcasting: 3\nI0516 13:16:27.757461 83 log.go:172] (0xc0008b0370) Reply frame received for 3\nI0516 13:16:27.757488 83 log.go:172] (0xc0008b0370) (0xc0002f8b40) Create stream\nI0516 13:16:27.757502 83 log.go:172] (0xc0008b0370) (0xc0002f8b40) Stream added, broadcasting: 5\nI0516 13:16:27.758148 83 log.go:172] (0xc0008b0370) Reply frame received for 5\nI0516 13:16:27.812821 83 log.go:172] (0xc0008b0370) Data frame received for 5\nI0516 13:16:27.812850 83 log.go:172] (0xc0002f8b40) (5) Data frame handling\nI0516 13:16:27.812869 83 log.go:172] (0xc0008b0370) Data frame received for 3\nI0516 13:16:27.812874 83 log.go:172] (0xc0002f8aa0) (3) Data frame handling\nI0516 13:16:27.812882 83 log.go:172] (0xc0002f8aa0) (3) Data frame sent\nI0516 13:16:27.812886 83 log.go:172] (0xc0008b0370) Data frame received for 3\nI0516 13:16:27.812890 83 log.go:172] (0xc0002f8aa0) (3) Data frame handling\nI0516 13:16:27.814728 83 log.go:172] (0xc0008b0370) Data frame received for 1\nI0516 13:16:27.814751 83 log.go:172] (0xc0002f8a00) (1) Data frame handling\nI0516 13:16:27.814764 83 log.go:172] (0xc0002f8a00) (1) Data frame sent\nI0516 13:16:27.814782 83 log.go:172] (0xc0008b0370) (0xc0002f8a00) Stream removed, broadcasting: 1\nI0516 13:16:27.814798 83 log.go:172] (0xc0008b0370) Go away received\nI0516 13:16:27.815240 83 log.go:172] (0xc0008b0370) (0xc0002f8a00) Stream removed, broadcasting: 1\nI0516 13:16:27.815268 83 log.go:172] (0xc0008b0370) (0xc0002f8aa0) Stream removed, broadcasting: 3\nI0516 13:16:27.815276 83 log.go:172] (0xc0008b0370) (0xc0002f8b40) Stream removed, broadcasting: 5\n" May 16 13:16:27.820: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:16:27.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8373" for this suite. May 16 13:16:33.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:16:33.919: INFO: namespace emptydir-8373 deletion completed in 6.095124342s • [SLOW TEST:15.302 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:16:33.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:16:34.019: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:16:38.152: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2722" for this suite.
May 16 13:17:24.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:17:24.251: INFO: namespace pods-2722 deletion completed in 46.095693269s
• [SLOW TEST:50.332 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:17:24.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
May 16 13:17:24.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2369'
May 16 13:17:24.591: INFO: stderr: ""
May 16 13:17:24.591: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 16 13:17:24.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2369'
May 16 13:17:24.708: INFO: stderr: ""
May 16 13:17:24.708: INFO: stdout: "update-demo-nautilus-ccgpp update-demo-nautilus-d74vw "
May 16 13:17:24.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccgpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:24.827: INFO: stderr: ""
May 16 13:17:24.827: INFO: stdout: ""
May 16 13:17:24.827: INFO: update-demo-nautilus-ccgpp is created but not running
May 16 13:17:29.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2369'
May 16 13:17:29.921: INFO: stderr: ""
May 16 13:17:29.921: INFO: stdout: "update-demo-nautilus-ccgpp update-demo-nautilus-d74vw "
May 16 13:17:29.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccgpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:30.009: INFO: stderr: ""
May 16 13:17:30.009: INFO: stdout: "true"
May 16 13:17:30.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccgpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:30.101: INFO: stderr: ""
May 16 13:17:30.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 16 13:17:30.101: INFO: validating pod update-demo-nautilus-ccgpp
May 16 13:17:30.105: INFO: got data: {
  "image": "nautilus.jpg"
}
May 16 13:17:30.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 16 13:17:30.106: INFO: update-demo-nautilus-ccgpp is verified up and running
May 16 13:17:30.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d74vw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:30.200: INFO: stderr: ""
May 16 13:17:30.200: INFO: stdout: "true"
May 16 13:17:30.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d74vw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:30.290: INFO: stderr: ""
May 16 13:17:30.290: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 16 13:17:30.290: INFO: validating pod update-demo-nautilus-d74vw
May 16 13:17:30.294: INFO: got data: {
  "image": "nautilus.jpg"
}
May 16 13:17:30.294: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 16 13:17:30.294: INFO: update-demo-nautilus-d74vw is verified up and running
STEP: rolling-update to new replication controller
May 16 13:17:30.295: INFO: scanned /root for discovery docs:
May 16 13:17:30.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2369'
May 16 13:17:52.955: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 16 13:17:52.955: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 16 13:17:52.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2369'
May 16 13:17:53.055: INFO: stderr: ""
May 16 13:17:53.055: INFO: stdout: "update-demo-kitten-kz2vm update-demo-kitten-t66dd "
May 16 13:17:53.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kz2vm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:53.164: INFO: stderr: ""
May 16 13:17:53.164: INFO: stdout: "true"
May 16 13:17:53.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kz2vm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:53.266: INFO: stderr: ""
May 16 13:17:53.266: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 16 13:17:53.266: INFO: validating pod update-demo-kitten-kz2vm
May 16 13:17:53.269: INFO: got data: {
  "image": "kitten.jpg"
}
May 16 13:17:53.269: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 16 13:17:53.269: INFO: update-demo-kitten-kz2vm is verified up and running
May 16 13:17:53.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t66dd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:53.394: INFO: stderr: ""
May 16 13:17:53.394: INFO: stdout: "true"
May 16 13:17:53.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t66dd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2369'
May 16 13:17:53.494: INFO: stderr: ""
May 16 13:17:53.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 16 13:17:53.494: INFO: validating pod update-demo-kitten-t66dd
May 16 13:17:53.498: INFO: got data: {
  "image": "kitten.jpg"
}
May 16 13:17:53.498: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 16 13:17:53.498: INFO: update-demo-kitten-t66dd is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:17:53.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2369" for this suite.
May 16 13:18:15.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:18:15.589: INFO: namespace kubectl-2369 deletion completed in 22.087243314s
• [SLOW TEST:51.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:18:15.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6181
I0516 13:18:15.694937 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6181, replica
count: 1 I0516 13:18:16.745514 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:18:17.745726 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:18:18.745928 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:18:19.746155 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 13:18:19.893: INFO: Created: latency-svc-5cwjp May 16 13:18:19.934: INFO: Got endpoints: latency-svc-5cwjp [88.238423ms] May 16 13:18:19.974: INFO: Created: latency-svc-h8qzb May 16 13:18:20.010: INFO: Got endpoints: latency-svc-h8qzb [75.140912ms] May 16 13:18:20.084: INFO: Created: latency-svc-j4l9t May 16 13:18:20.091: INFO: Got endpoints: latency-svc-j4l9t [156.634994ms] May 16 13:18:20.115: INFO: Created: latency-svc-vr8tv May 16 13:18:20.127: INFO: Got endpoints: latency-svc-vr8tv [193.028403ms] May 16 13:18:20.156: INFO: Created: latency-svc-w64ff May 16 13:18:20.176: INFO: Got endpoints: latency-svc-w64ff [241.125855ms] May 16 13:18:20.233: INFO: Created: latency-svc-gkv9j May 16 13:18:20.237: INFO: Got endpoints: latency-svc-gkv9j [302.559241ms] May 16 13:18:20.262: INFO: Created: latency-svc-nhmb2 May 16 13:18:20.270: INFO: Got endpoints: latency-svc-nhmb2 [335.689937ms] May 16 13:18:20.292: INFO: Created: latency-svc-gw8hp May 16 13:18:20.300: INFO: Got endpoints: latency-svc-gw8hp [365.939179ms] May 16 13:18:20.330: INFO: Created: latency-svc-qp6j6 May 16 13:18:20.382: INFO: Got endpoints: latency-svc-qp6j6 [447.577485ms] May 16 13:18:20.385: INFO: Created: latency-svc-ckrmb May 16 13:18:20.418: INFO: Got endpoints: latency-svc-ckrmb [483.2171ms] May 16 
13:18:20.466: INFO: Created: latency-svc-nndfk May 16 13:18:20.516: INFO: Got endpoints: latency-svc-nndfk [581.689603ms] May 16 13:18:20.535: INFO: Created: latency-svc-bk9cp May 16 13:18:20.548: INFO: Got endpoints: latency-svc-bk9cp [613.277907ms] May 16 13:18:20.565: INFO: Created: latency-svc-lzlvc May 16 13:18:20.574: INFO: Got endpoints: latency-svc-lzlvc [639.863719ms] May 16 13:18:20.597: INFO: Created: latency-svc-r5zcx May 16 13:18:20.676: INFO: Got endpoints: latency-svc-r5zcx [741.278894ms] May 16 13:18:20.678: INFO: Created: latency-svc-6b9ms May 16 13:18:20.688: INFO: Got endpoints: latency-svc-6b9ms [753.095968ms] May 16 13:18:20.709: INFO: Created: latency-svc-mdsx4 May 16 13:18:20.743: INFO: Got endpoints: latency-svc-mdsx4 [808.145071ms] May 16 13:18:20.775: INFO: Created: latency-svc-xqs55 May 16 13:18:20.844: INFO: Got endpoints: latency-svc-xqs55 [834.045823ms] May 16 13:18:20.848: INFO: Created: latency-svc-kckvk May 16 13:18:20.856: INFO: Got endpoints: latency-svc-kckvk [764.94886ms] May 16 13:18:20.880: INFO: Created: latency-svc-2zpcs May 16 13:18:20.893: INFO: Got endpoints: latency-svc-2zpcs [765.68572ms] May 16 13:18:20.916: INFO: Created: latency-svc-knncv May 16 13:18:20.929: INFO: Got endpoints: latency-svc-knncv [753.479118ms] May 16 13:18:20.988: INFO: Created: latency-svc-x2cs4 May 16 13:18:20.991: INFO: Got endpoints: latency-svc-x2cs4 [753.588304ms] May 16 13:18:21.014: INFO: Created: latency-svc-x44b7 May 16 13:18:21.030: INFO: Got endpoints: latency-svc-x44b7 [759.987693ms] May 16 13:18:21.084: INFO: Created: latency-svc-7w2wn May 16 13:18:21.126: INFO: Got endpoints: latency-svc-7w2wn [825.259257ms] May 16 13:18:21.144: INFO: Created: latency-svc-6g8k2 May 16 13:18:21.157: INFO: Got endpoints: latency-svc-6g8k2 [774.59636ms] May 16 13:18:21.183: INFO: Created: latency-svc-8j8xx May 16 13:18:21.198: INFO: Got endpoints: latency-svc-8j8xx [780.583397ms] May 16 13:18:21.218: INFO: Created: latency-svc-sxlqc May 16 13:18:21.263: 
INFO: Got endpoints: latency-svc-sxlqc [746.3427ms] May 16 13:18:21.282: INFO: Created: latency-svc-5zvsf May 16 13:18:21.295: INFO: Got endpoints: latency-svc-5zvsf [746.961332ms] May 16 13:18:21.318: INFO: Created: latency-svc-6j7zp May 16 13:18:21.331: INFO: Got endpoints: latency-svc-6j7zp [756.754358ms] May 16 13:18:21.425: INFO: Created: latency-svc-wcnhv May 16 13:18:21.428: INFO: Got endpoints: latency-svc-wcnhv [752.250488ms] May 16 13:18:21.452: INFO: Created: latency-svc-xwk4r May 16 13:18:21.471: INFO: Got endpoints: latency-svc-xwk4r [782.913168ms] May 16 13:18:21.490: INFO: Created: latency-svc-wzg8d May 16 13:18:21.507: INFO: Got endpoints: latency-svc-wzg8d [764.320651ms] May 16 13:18:21.557: INFO: Created: latency-svc-v64ns May 16 13:18:21.582: INFO: Got endpoints: latency-svc-v64ns [738.299134ms] May 16 13:18:21.583: INFO: Created: latency-svc-j98sx May 16 13:18:21.591: INFO: Got endpoints: latency-svc-j98sx [734.463523ms] May 16 13:18:21.621: INFO: Created: latency-svc-mb986 May 16 13:18:21.742: INFO: Got endpoints: latency-svc-mb986 [848.617106ms] May 16 13:18:21.744: INFO: Created: latency-svc-9cpxg May 16 13:18:21.754: INFO: Got endpoints: latency-svc-9cpxg [824.733173ms] May 16 13:18:21.786: INFO: Created: latency-svc-zc7s8 May 16 13:18:21.795: INFO: Got endpoints: latency-svc-zc7s8 [804.676651ms] May 16 13:18:21.822: INFO: Created: latency-svc-b755z May 16 13:18:21.838: INFO: Got endpoints: latency-svc-b755z [807.577802ms] May 16 13:18:21.935: INFO: Created: latency-svc-sdbzs May 16 13:18:21.977: INFO: Got endpoints: latency-svc-sdbzs [851.509043ms] May 16 13:18:22.020: INFO: Created: latency-svc-l6j8s May 16 13:18:22.032: INFO: Got endpoints: latency-svc-l6j8s [874.841796ms] May 16 13:18:22.083: INFO: Created: latency-svc-5lb97 May 16 13:18:22.112: INFO: Got endpoints: latency-svc-5lb97 [913.896139ms] May 16 13:18:22.113: INFO: Created: latency-svc-4wpwl May 16 13:18:22.122: INFO: Got endpoints: latency-svc-4wpwl [859.245828ms] May 16 
13:18:22.151: INFO: Created: latency-svc-m6vpf May 16 13:18:22.165: INFO: Got endpoints: latency-svc-m6vpf [869.567665ms] May 16 13:18:22.246: INFO: Created: latency-svc-q9d9d May 16 13:18:22.249: INFO: Got endpoints: latency-svc-q9d9d [918.016865ms] May 16 13:18:22.280: INFO: Created: latency-svc-lb599 May 16 13:18:22.297: INFO: Got endpoints: latency-svc-lb599 [868.623715ms] May 16 13:18:22.323: INFO: Created: latency-svc-9vt8j May 16 13:18:22.333: INFO: Got endpoints: latency-svc-9vt8j [862.63734ms] May 16 13:18:22.386: INFO: Created: latency-svc-bspr9 May 16 13:18:22.421: INFO: Got endpoints: latency-svc-bspr9 [913.85896ms] May 16 13:18:22.509: INFO: Created: latency-svc-25b5d May 16 13:18:22.512: INFO: Got endpoints: latency-svc-25b5d [929.567875ms] May 16 13:18:22.551: INFO: Created: latency-svc-zdf5f May 16 13:18:22.562: INFO: Got endpoints: latency-svc-zdf5f [971.562911ms] May 16 13:18:22.586: INFO: Created: latency-svc-58qjq May 16 13:18:22.646: INFO: Got endpoints: latency-svc-58qjq [904.052213ms] May 16 13:18:22.661: INFO: Created: latency-svc-2dv2h May 16 13:18:22.676: INFO: Got endpoints: latency-svc-2dv2h [922.271459ms] May 16 13:18:22.707: INFO: Created: latency-svc-vcmz6 May 16 13:18:22.718: INFO: Got endpoints: latency-svc-vcmz6 [922.796461ms] May 16 13:18:22.802: INFO: Created: latency-svc-664s8 May 16 13:18:22.818: INFO: Got endpoints: latency-svc-664s8 [980.397587ms] May 16 13:18:22.854: INFO: Created: latency-svc-94jsv May 16 13:18:22.869: INFO: Got endpoints: latency-svc-94jsv [891.66345ms] May 16 13:18:22.889: INFO: Created: latency-svc-tmjnj May 16 13:18:22.946: INFO: Got endpoints: latency-svc-tmjnj [913.78456ms] May 16 13:18:22.976: INFO: Created: latency-svc-48vh5 May 16 13:18:22.990: INFO: Got endpoints: latency-svc-48vh5 [877.134659ms] May 16 13:18:23.013: INFO: Created: latency-svc-s8xcl May 16 13:18:23.026: INFO: Got endpoints: latency-svc-s8xcl [903.787989ms] May 16 13:18:23.084: INFO: Created: latency-svc-w72bg May 16 13:18:23.086: 
INFO: Got endpoints: latency-svc-w72bg [921.535773ms] May 16 13:18:23.111: INFO: Created: latency-svc-zpvpk May 16 13:18:23.128: INFO: Got endpoints: latency-svc-zpvpk [878.961473ms] May 16 13:18:23.166: INFO: Created: latency-svc-s9dzc May 16 13:18:23.245: INFO: Got endpoints: latency-svc-s9dzc [947.641627ms] May 16 13:18:23.265: INFO: Created: latency-svc-l75gp May 16 13:18:23.291: INFO: Got endpoints: latency-svc-l75gp [957.480269ms] May 16 13:18:23.334: INFO: Created: latency-svc-wvpr7 May 16 13:18:23.395: INFO: Got endpoints: latency-svc-wvpr7 [973.68624ms] May 16 13:18:23.398: INFO: Created: latency-svc-m8fbr May 16 13:18:23.407: INFO: Got endpoints: latency-svc-m8fbr [894.965664ms] May 16 13:18:23.432: INFO: Created: latency-svc-rg8xq May 16 13:18:23.448: INFO: Got endpoints: latency-svc-rg8xq [885.385077ms] May 16 13:18:23.474: INFO: Created: latency-svc-qrgdc May 16 13:18:23.484: INFO: Got endpoints: latency-svc-qrgdc [837.624894ms] May 16 13:18:23.545: INFO: Created: latency-svc-h2pzg May 16 13:18:23.548: INFO: Got endpoints: latency-svc-h2pzg [871.606348ms] May 16 13:18:23.580: INFO: Created: latency-svc-zfhzj May 16 13:18:23.592: INFO: Got endpoints: latency-svc-zfhzj [873.816814ms] May 16 13:18:23.624: INFO: Created: latency-svc-w6bjw May 16 13:18:23.635: INFO: Got endpoints: latency-svc-w6bjw [816.093215ms] May 16 13:18:23.695: INFO: Created: latency-svc-9czq2 May 16 13:18:23.700: INFO: Got endpoints: latency-svc-9czq2 [831.303831ms] May 16 13:18:23.723: INFO: Created: latency-svc-9x444 May 16 13:18:23.741: INFO: Got endpoints: latency-svc-9x444 [106.512493ms] May 16 13:18:23.763: INFO: Created: latency-svc-fp9kp May 16 13:18:23.839: INFO: Got endpoints: latency-svc-fp9kp [892.929207ms] May 16 13:18:23.858: INFO: Created: latency-svc-tcggf May 16 13:18:23.869: INFO: Got endpoints: latency-svc-tcggf [879.820095ms] May 16 13:18:23.900: INFO: Created: latency-svc-hf7v4 May 16 13:18:23.911: INFO: Got endpoints: latency-svc-hf7v4 [885.56291ms] May 16 
13:18:23.976: INFO: Created: latency-svc-cv6v9 May 16 13:18:23.979: INFO: Got endpoints: latency-svc-cv6v9 [892.539804ms] May 16 13:18:24.030: INFO: Created: latency-svc-cxcg8 May 16 13:18:24.046: INFO: Got endpoints: latency-svc-cxcg8 [917.690008ms] May 16 13:18:24.120: INFO: Created: latency-svc-fnr68 May 16 13:18:24.146: INFO: Got endpoints: latency-svc-fnr68 [901.404501ms] May 16 13:18:24.147: INFO: Created: latency-svc-hvnlp May 16 13:18:24.173: INFO: Got endpoints: latency-svc-hvnlp [881.975519ms] May 16 13:18:24.219: INFO: Created: latency-svc-s2vqs May 16 13:18:24.281: INFO: Got endpoints: latency-svc-s2vqs [886.219038ms] May 16 13:18:24.283: INFO: Created: latency-svc-8vg27 May 16 13:18:24.285: INFO: Got endpoints: latency-svc-8vg27 [878.318646ms] May 16 13:18:24.316: INFO: Created: latency-svc-ksz6f May 16 13:18:24.328: INFO: Got endpoints: latency-svc-ksz6f [879.87231ms] May 16 13:18:24.366: INFO: Created: latency-svc-54j6f May 16 13:18:24.448: INFO: Got endpoints: latency-svc-54j6f [964.696461ms] May 16 13:18:24.451: INFO: Created: latency-svc-8swcw May 16 13:18:24.479: INFO: Got endpoints: latency-svc-8swcw [930.605789ms] May 16 13:18:24.518: INFO: Created: latency-svc-7t5gc May 16 13:18:24.533: INFO: Got endpoints: latency-svc-7t5gc [941.078303ms] May 16 13:18:24.610: INFO: Created: latency-svc-frvt5 May 16 13:18:24.613: INFO: Got endpoints: latency-svc-frvt5 [912.782661ms] May 16 13:18:24.641: INFO: Created: latency-svc-4xlbf May 16 13:18:24.659: INFO: Got endpoints: latency-svc-4xlbf [917.606987ms] May 16 13:18:24.677: INFO: Created: latency-svc-ph4cv May 16 13:18:24.742: INFO: Got endpoints: latency-svc-ph4cv [903.014713ms] May 16 13:18:24.752: INFO: Created: latency-svc-wd2vs May 16 13:18:24.774: INFO: Got endpoints: latency-svc-wd2vs [904.07122ms] May 16 13:18:24.822: INFO: Created: latency-svc-tkswm May 16 13:18:24.833: INFO: Got endpoints: latency-svc-tkswm [921.95724ms] May 16 13:18:24.904: INFO: Created: latency-svc-qfh7f May 16 13:18:24.907: 
INFO: Got endpoints: latency-svc-qfh7f [927.882934ms] May 16 13:18:24.938: INFO: Created: latency-svc-6qnsd May 16 13:18:24.954: INFO: Got endpoints: latency-svc-6qnsd [907.944325ms] May 16 13:18:24.980: INFO: Created: latency-svc-nfz8m May 16 13:18:24.998: INFO: Got endpoints: latency-svc-nfz8m [852.016571ms] May 16 13:18:25.047: INFO: Created: latency-svc-lm4lf May 16 13:18:25.051: INFO: Got endpoints: latency-svc-lm4lf [878.516599ms] May 16 13:18:25.079: INFO: Created: latency-svc-cvbv9 May 16 13:18:25.093: INFO: Got endpoints: latency-svc-cvbv9 [812.148385ms] May 16 13:18:25.119: INFO: Created: latency-svc-r27kq May 16 13:18:25.136: INFO: Got endpoints: latency-svc-r27kq [850.72194ms] May 16 13:18:25.185: INFO: Created: latency-svc-wvpdj May 16 13:18:25.189: INFO: Got endpoints: latency-svc-wvpdj [861.806093ms] May 16 13:18:25.214: INFO: Created: latency-svc-s5tns May 16 13:18:25.232: INFO: Got endpoints: latency-svc-s5tns [783.60456ms] May 16 13:18:25.283: INFO: Created: latency-svc-n8z78 May 16 13:18:25.352: INFO: Got endpoints: latency-svc-n8z78 [873.82756ms] May 16 13:18:25.358: INFO: Created: latency-svc-6r7fr May 16 13:18:25.382: INFO: Got endpoints: latency-svc-6r7fr [849.043142ms] May 16 13:18:25.407: INFO: Created: latency-svc-b58dw May 16 13:18:25.419: INFO: Got endpoints: latency-svc-b58dw [805.300088ms] May 16 13:18:25.451: INFO: Created: latency-svc-8ttns May 16 13:18:25.485: INFO: Got endpoints: latency-svc-8ttns [826.065586ms] May 16 13:18:25.508: INFO: Created: latency-svc-qmcbd May 16 13:18:25.521: INFO: Got endpoints: latency-svc-qmcbd [779.76376ms] May 16 13:18:25.541: INFO: Created: latency-svc-5mjmd May 16 13:18:25.557: INFO: Got endpoints: latency-svc-5mjmd [783.497152ms] May 16 13:18:25.641: INFO: Created: latency-svc-4kl5v May 16 13:18:25.648: INFO: Got endpoints: latency-svc-4kl5v [813.999004ms] May 16 13:18:25.688: INFO: Created: latency-svc-sshsr May 16 13:18:25.702: INFO: Got endpoints: latency-svc-sshsr [794.941104ms] May 16 
13:18:25.727: INFO: Created: latency-svc-tp979 May 16 13:18:25.790: INFO: Got endpoints: latency-svc-tp979 [835.744729ms] May 16 13:18:25.793: INFO: Created: latency-svc-9rczm May 16 13:18:25.798: INFO: Got endpoints: latency-svc-9rczm [799.905939ms] May 16 13:18:25.820: INFO: Created: latency-svc-r4hmm May 16 13:18:25.828: INFO: Got endpoints: latency-svc-r4hmm [776.689456ms] May 16 13:18:25.856: INFO: Created: latency-svc-m6v69 May 16 13:18:25.934: INFO: Got endpoints: latency-svc-m6v69 [840.560971ms] May 16 13:18:25.948: INFO: Created: latency-svc-ps94c May 16 13:18:25.967: INFO: Got endpoints: latency-svc-ps94c [831.69147ms] May 16 13:18:26.127: INFO: Created: latency-svc-jbf4g May 16 13:18:26.130: INFO: Got endpoints: latency-svc-jbf4g [940.316886ms] May 16 13:18:26.168: INFO: Created: latency-svc-r7ngv May 16 13:18:26.181: INFO: Got endpoints: latency-svc-r7ngv [949.241145ms] May 16 13:18:26.204: INFO: Created: latency-svc-4zcl8 May 16 13:18:26.217: INFO: Got endpoints: latency-svc-4zcl8 [864.447701ms] May 16 13:18:26.275: INFO: Created: latency-svc-x5s74 May 16 13:18:26.283: INFO: Got endpoints: latency-svc-x5s74 [901.007211ms] May 16 13:18:26.309: INFO: Created: latency-svc-5dxb2 May 16 13:18:26.326: INFO: Got endpoints: latency-svc-5dxb2 [907.0765ms] May 16 13:18:26.348: INFO: Created: latency-svc-bpbpf May 16 13:18:26.362: INFO: Got endpoints: latency-svc-bpbpf [877.124633ms] May 16 13:18:26.419: INFO: Created: latency-svc-krgwb May 16 13:18:26.422: INFO: Got endpoints: latency-svc-krgwb [900.726182ms] May 16 13:18:26.458: INFO: Created: latency-svc-hjtq5 May 16 13:18:26.477: INFO: Got endpoints: latency-svc-hjtq5 [919.523059ms] May 16 13:18:26.511: INFO: Created: latency-svc-b5tzp May 16 13:18:26.556: INFO: Got endpoints: latency-svc-b5tzp [908.482951ms] May 16 13:18:26.564: INFO: Created: latency-svc-ljfgc May 16 13:18:26.579: INFO: Got endpoints: latency-svc-ljfgc [877.163177ms] May 16 13:18:26.600: INFO: Created: latency-svc-n4dbg May 16 13:18:26.609: 
INFO: Got endpoints: latency-svc-n4dbg [819.235282ms] May 16 13:18:26.630: INFO: Created: latency-svc-7rj7j May 16 13:18:26.640: INFO: Got endpoints: latency-svc-7rj7j [841.50352ms] May 16 13:18:26.694: INFO: Created: latency-svc-7xd8n May 16 13:18:26.706: INFO: Got endpoints: latency-svc-7xd8n [877.616239ms] May 16 13:18:26.746: INFO: Created: latency-svc-x5plm May 16 13:18:26.760: INFO: Got endpoints: latency-svc-x5plm [826.387598ms] May 16 13:18:26.792: INFO: Created: latency-svc-ljdb4 May 16 13:18:26.832: INFO: Got endpoints: latency-svc-ljdb4 [864.276647ms] May 16 13:18:26.863: INFO: Created: latency-svc-fkj8k May 16 13:18:26.881: INFO: Got endpoints: latency-svc-fkj8k [751.106238ms] May 16 13:18:26.909: INFO: Created: latency-svc-5x5p2 May 16 13:18:26.970: INFO: Got endpoints: latency-svc-5x5p2 [788.170992ms] May 16 13:18:26.982: INFO: Created: latency-svc-2mvsx May 16 13:18:27.013: INFO: Got endpoints: latency-svc-2mvsx [796.221943ms] May 16 13:18:27.044: INFO: Created: latency-svc-8t85n May 16 13:18:27.062: INFO: Got endpoints: latency-svc-8t85n [778.058753ms] May 16 13:18:27.120: INFO: Created: latency-svc-jvwtf May 16 13:18:27.128: INFO: Got endpoints: latency-svc-jvwtf [802.198572ms] May 16 13:18:27.154: INFO: Created: latency-svc-k6vk7 May 16 13:18:27.165: INFO: Got endpoints: latency-svc-k6vk7 [802.969879ms] May 16 13:18:27.185: INFO: Created: latency-svc-zvsrr May 16 13:18:27.269: INFO: Got endpoints: latency-svc-zvsrr [847.083584ms] May 16 13:18:27.277: INFO: Created: latency-svc-dvvrc May 16 13:18:27.290: INFO: Got endpoints: latency-svc-dvvrc [813.565661ms] May 16 13:18:27.313: INFO: Created: latency-svc-fvtw7 May 16 13:18:27.327: INFO: Got endpoints: latency-svc-fvtw7 [770.483343ms] May 16 13:18:27.356: INFO: Created: latency-svc-7d6vj May 16 13:18:27.412: INFO: Got endpoints: latency-svc-7d6vj [833.095849ms] May 16 13:18:27.416: INFO: Created: latency-svc-6l8m6 May 16 13:18:27.424: INFO: Got endpoints: latency-svc-6l8m6 [814.424185ms] May 16 
13:18:27.449: INFO: Created: latency-svc-9wqxl May 16 13:18:27.466: INFO: Got endpoints: latency-svc-9wqxl [826.151488ms] May 16 13:18:27.488: INFO: Created: latency-svc-4m4sq May 16 13:18:27.581: INFO: Got endpoints: latency-svc-4m4sq [874.851894ms] May 16 13:18:27.583: INFO: Created: latency-svc-lpwhq May 16 13:18:27.592: INFO: Got endpoints: latency-svc-lpwhq [832.050969ms] May 16 13:18:27.629: INFO: Created: latency-svc-sk4lv May 16 13:18:27.653: INFO: Got endpoints: latency-svc-sk4lv [821.185354ms] May 16 13:18:27.710: INFO: Created: latency-svc-mjwj4 May 16 13:18:27.725: INFO: Got endpoints: latency-svc-mjwj4 [843.924924ms] May 16 13:18:27.746: INFO: Created: latency-svc-z8rms May 16 13:18:27.761: INFO: Got endpoints: latency-svc-z8rms [791.609093ms] May 16 13:18:27.782: INFO: Created: latency-svc-jnhn8 May 16 13:18:27.844: INFO: Got endpoints: latency-svc-jnhn8 [830.437452ms] May 16 13:18:27.856: INFO: Created: latency-svc-57fv9 May 16 13:18:27.870: INFO: Got endpoints: latency-svc-57fv9 [808.405852ms] May 16 13:18:27.911: INFO: Created: latency-svc-89pqf May 16 13:18:27.924: INFO: Got endpoints: latency-svc-89pqf [795.916908ms] May 16 13:18:28.000: INFO: Created: latency-svc-qvn6l May 16 13:18:28.002: INFO: Got endpoints: latency-svc-qvn6l [837.281619ms] May 16 13:18:28.028: INFO: Created: latency-svc-r66g6 May 16 13:18:28.038: INFO: Got endpoints: latency-svc-r66g6 [769.044756ms] May 16 13:18:28.211: INFO: Created: latency-svc-7pclr May 16 13:18:28.307: INFO: Got endpoints: latency-svc-7pclr [1.016322023s] May 16 13:18:28.307: INFO: Created: latency-svc-lbr2q May 16 13:18:28.403: INFO: Got endpoints: latency-svc-lbr2q [1.076317876s] May 16 13:18:28.479: INFO: Created: latency-svc-742rh May 16 13:18:28.534: INFO: Got endpoints: latency-svc-742rh [1.121950362s] May 16 13:18:28.577: INFO: Created: latency-svc-dx6lz May 16 13:18:28.604: INFO: Got endpoints: latency-svc-dx6lz [1.179756923s] May 16 13:18:28.631: INFO: Created: latency-svc-hdg89 May 16 
13:18:28.683: INFO: Got endpoints: latency-svc-hdg89 [1.217298629s] May 16 13:18:28.712: INFO: Created: latency-svc-8w45j May 16 13:18:28.742: INFO: Got endpoints: latency-svc-8w45j [1.161395087s] May 16 13:18:28.766: INFO: Created: latency-svc-5mpnc May 16 13:18:28.778: INFO: Got endpoints: latency-svc-5mpnc [1.185212027s] May 16 13:18:28.826: INFO: Created: latency-svc-fpqk7 May 16 13:18:28.832: INFO: Got endpoints: latency-svc-fpqk7 [1.178637418s] May 16 13:18:28.852: INFO: Created: latency-svc-t958j May 16 13:18:28.868: INFO: Got endpoints: latency-svc-t958j [1.143305183s] May 16 13:18:28.889: INFO: Created: latency-svc-h8jdr May 16 13:18:28.905: INFO: Got endpoints: latency-svc-h8jdr [1.143245735s] May 16 13:18:28.970: INFO: Created: latency-svc-jdm54 May 16 13:18:28.993: INFO: Got endpoints: latency-svc-jdm54 [1.149537753s] May 16 13:18:29.023: INFO: Created: latency-svc-vsjdn May 16 13:18:29.037: INFO: Got endpoints: latency-svc-vsjdn [1.167073736s] May 16 13:18:29.056: INFO: Created: latency-svc-wn9fc May 16 13:18:29.119: INFO: Got endpoints: latency-svc-wn9fc [1.194866013s] May 16 13:18:29.122: INFO: Created: latency-svc-2rzhw May 16 13:18:29.128: INFO: Got endpoints: latency-svc-2rzhw [1.125347158s] May 16 13:18:29.153: INFO: Created: latency-svc-j8k87 May 16 13:18:29.170: INFO: Got endpoints: latency-svc-j8k87 [1.131593554s] May 16 13:18:29.206: INFO: Created: latency-svc-phvhs May 16 13:18:29.213: INFO: Got endpoints: latency-svc-phvhs [906.297829ms] May 16 13:18:29.258: INFO: Created: latency-svc-9b6mw May 16 13:18:29.279: INFO: Got endpoints: latency-svc-9b6mw [875.462202ms] May 16 13:18:29.315: INFO: Created: latency-svc-hmlln May 16 13:18:29.328: INFO: Got endpoints: latency-svc-hmlln [793.272658ms] May 16 13:18:29.383: INFO: Created: latency-svc-jbffb May 16 13:18:29.387: INFO: Got endpoints: latency-svc-jbffb [783.014947ms] May 16 13:18:29.419: INFO: Created: latency-svc-vkzh7 May 16 13:18:29.435: INFO: Got endpoints: latency-svc-vkzh7 
[752.076058ms] May 16 13:18:29.455: INFO: Created: latency-svc-kmm9j May 16 13:18:29.465: INFO: Got endpoints: latency-svc-kmm9j [723.058824ms] May 16 13:18:29.532: INFO: Created: latency-svc-hkgmt May 16 13:18:29.561: INFO: Got endpoints: latency-svc-hkgmt [782.999277ms] May 16 13:18:29.561: INFO: Created: latency-svc-8f74w May 16 13:18:29.574: INFO: Got endpoints: latency-svc-8f74w [742.287627ms] May 16 13:18:29.594: INFO: Created: latency-svc-xqkrn May 16 13:18:29.611: INFO: Got endpoints: latency-svc-xqkrn [742.611967ms] May 16 13:18:29.666: INFO: Created: latency-svc-n9kvh May 16 13:18:29.670: INFO: Got endpoints: latency-svc-n9kvh [765.246599ms] May 16 13:18:29.713: INFO: Created: latency-svc-8sc2j May 16 13:18:29.725: INFO: Got endpoints: latency-svc-8sc2j [731.192563ms] May 16 13:18:29.758: INFO: Created: latency-svc-8sjjh May 16 13:18:29.820: INFO: Got endpoints: latency-svc-8sjjh [782.550182ms] May 16 13:18:29.857: INFO: Created: latency-svc-jlxtm May 16 13:18:29.869: INFO: Got endpoints: latency-svc-jlxtm [750.484682ms] May 16 13:18:29.893: INFO: Created: latency-svc-g7ph7 May 16 13:18:29.906: INFO: Got endpoints: latency-svc-g7ph7 [778.280287ms] May 16 13:18:29.946: INFO: Created: latency-svc-f8jnp May 16 13:18:29.948: INFO: Got endpoints: latency-svc-f8jnp [777.827105ms] May 16 13:18:30.029: INFO: Created: latency-svc-g45p8 May 16 13:18:30.155: INFO: Got endpoints: latency-svc-g45p8 [942.005978ms] May 16 13:18:30.188: INFO: Created: latency-svc-vlk2t May 16 13:18:30.195: INFO: Got endpoints: latency-svc-vlk2t [916.591266ms] May 16 13:18:30.220: INFO: Created: latency-svc-6cts7 May 16 13:18:30.311: INFO: Got endpoints: latency-svc-6cts7 [982.973707ms] May 16 13:18:30.320: INFO: Created: latency-svc-hhds6 May 16 13:18:30.333: INFO: Got endpoints: latency-svc-hhds6 [946.502371ms] May 16 13:18:30.386: INFO: Created: latency-svc-vtt2f May 16 13:18:30.399: INFO: Got endpoints: latency-svc-vtt2f [963.700715ms] May 16 13:18:30.461: INFO: Created: 
latency-svc-wdlc6 May 16 13:18:30.484: INFO: Got endpoints: latency-svc-wdlc6 [1.01816835s] May 16 13:18:30.508: INFO: Created: latency-svc-zf2ll May 16 13:18:30.526: INFO: Got endpoints: latency-svc-zf2ll [965.224409ms] May 16 13:18:30.597: INFO: Created: latency-svc-8vkwq May 16 13:18:30.610: INFO: Got endpoints: latency-svc-8vkwq [1.035797047s] May 16 13:18:30.659: INFO: Created: latency-svc-rzh6b May 16 13:18:30.724: INFO: Got endpoints: latency-svc-rzh6b [1.112826598s] May 16 13:18:30.743: INFO: Created: latency-svc-htdfc May 16 13:18:30.760: INFO: Got endpoints: latency-svc-htdfc [1.090528239s] May 16 13:18:30.787: INFO: Created: latency-svc-nk69x May 16 13:18:30.815: INFO: Got endpoints: latency-svc-nk69x [1.090128712s] May 16 13:18:30.859: INFO: Created: latency-svc-945qc May 16 13:18:30.875: INFO: Got endpoints: latency-svc-945qc [1.055158069s] May 16 13:18:30.905: INFO: Created: latency-svc-dx786 May 16 13:18:30.917: INFO: Got endpoints: latency-svc-dx786 [1.047619361s] May 16 13:18:31.000: INFO: Created: latency-svc-wlstz May 16 13:18:31.021: INFO: Got endpoints: latency-svc-wlstz [1.114890842s] May 16 13:18:31.051: INFO: Created: latency-svc-8kl84 May 16 13:18:31.068: INFO: Got endpoints: latency-svc-8kl84 [1.119926337s] May 16 13:18:31.162: INFO: Created: latency-svc-kjcb9 May 16 13:18:31.200: INFO: Got endpoints: latency-svc-kjcb9 [1.044980889s] May 16 13:18:31.225: INFO: Created: latency-svc-sv66n May 16 13:18:31.253: INFO: Got endpoints: latency-svc-sv66n [1.058226471s] May 16 13:18:31.301: INFO: Created: latency-svc-mjx2s May 16 13:18:31.315: INFO: Got endpoints: latency-svc-mjx2s [1.004785577s] May 16 13:18:31.336: INFO: Created: latency-svc-cgzgr May 16 13:18:31.357: INFO: Got endpoints: latency-svc-cgzgr [1.023825478s] May 16 13:18:31.437: INFO: Created: latency-svc-8j88n May 16 13:18:31.459: INFO: Got endpoints: latency-svc-8j88n [1.059932407s] May 16 13:18:31.459: INFO: Created: latency-svc-lqbzk May 16 13:18:31.526: INFO: Created: 
latency-svc-zcrb5 May 16 13:18:31.526: INFO: Got endpoints: latency-svc-lqbzk [1.042215697s] May 16 13:18:31.531: INFO: Got endpoints: latency-svc-zcrb5 [1.005558246s] May 16 13:18:31.588: INFO: Created: latency-svc-qbct7 May 16 13:18:31.604: INFO: Got endpoints: latency-svc-qbct7 [994.182641ms] May 16 13:18:31.633: INFO: Created: latency-svc-vdbcp May 16 13:18:31.646: INFO: Got endpoints: latency-svc-vdbcp [922.07845ms] May 16 13:18:31.670: INFO: Created: latency-svc-bvnkh May 16 13:18:31.743: INFO: Got endpoints: latency-svc-bvnkh [982.22043ms] May 16 13:18:31.768: INFO: Created: latency-svc-4vfv9 May 16 13:18:31.779: INFO: Got endpoints: latency-svc-4vfv9 [963.875856ms] May 16 13:18:31.779: INFO: Latencies: [75.140912ms 106.512493ms 156.634994ms 193.028403ms 241.125855ms 302.559241ms 335.689937ms 365.939179ms 447.577485ms 483.2171ms 581.689603ms 613.277907ms 639.863719ms 723.058824ms 731.192563ms 734.463523ms 738.299134ms 741.278894ms 742.287627ms 742.611967ms 746.3427ms 746.961332ms 750.484682ms 751.106238ms 752.076058ms 752.250488ms 753.095968ms 753.479118ms 753.588304ms 756.754358ms 759.987693ms 764.320651ms 764.94886ms 765.246599ms 765.68572ms 769.044756ms 770.483343ms 774.59636ms 776.689456ms 777.827105ms 778.058753ms 778.280287ms 779.76376ms 780.583397ms 782.550182ms 782.913168ms 782.999277ms 783.014947ms 783.497152ms 783.60456ms 788.170992ms 791.609093ms 793.272658ms 794.941104ms 795.916908ms 796.221943ms 799.905939ms 802.198572ms 802.969879ms 804.676651ms 805.300088ms 807.577802ms 808.145071ms 808.405852ms 812.148385ms 813.565661ms 813.999004ms 814.424185ms 816.093215ms 819.235282ms 821.185354ms 824.733173ms 825.259257ms 826.065586ms 826.151488ms 826.387598ms 830.437452ms 831.303831ms 831.69147ms 832.050969ms 833.095849ms 834.045823ms 835.744729ms 837.281619ms 837.624894ms 840.560971ms 841.50352ms 843.924924ms 847.083584ms 848.617106ms 849.043142ms 850.72194ms 851.509043ms 852.016571ms 859.245828ms 861.806093ms 862.63734ms 864.276647ms 864.447701ms 
868.623715ms 869.567665ms 871.606348ms 873.816814ms 873.82756ms 874.841796ms 874.851894ms 875.462202ms 877.124633ms 877.134659ms 877.163177ms 877.616239ms 878.318646ms 878.516599ms 878.961473ms 879.820095ms 879.87231ms 881.975519ms 885.385077ms 885.56291ms 886.219038ms 891.66345ms 892.539804ms 892.929207ms 894.965664ms 900.726182ms 901.007211ms 901.404501ms 903.014713ms 903.787989ms 904.052213ms 904.07122ms 906.297829ms 907.0765ms 907.944325ms 908.482951ms 912.782661ms 913.78456ms 913.85896ms 913.896139ms 916.591266ms 917.606987ms 917.690008ms 918.016865ms 919.523059ms 921.535773ms 921.95724ms 922.07845ms 922.271459ms 922.796461ms 927.882934ms 929.567875ms 930.605789ms 940.316886ms 941.078303ms 942.005978ms 946.502371ms 947.641627ms 949.241145ms 957.480269ms 963.700715ms 963.875856ms 964.696461ms 965.224409ms 971.562911ms 973.68624ms 980.397587ms 982.22043ms 982.973707ms 994.182641ms 1.004785577s 1.005558246s 1.016322023s 1.01816835s 1.023825478s 1.035797047s 1.042215697s 1.044980889s 1.047619361s 1.055158069s 1.058226471s 1.059932407s 1.076317876s 1.090128712s 1.090528239s 1.112826598s 1.114890842s 1.119926337s 1.121950362s 1.125347158s 1.131593554s 1.143245735s 1.143305183s 1.149537753s 1.161395087s 1.167073736s 1.178637418s 1.179756923s 1.185212027s 1.194866013s 1.217298629s] May 16 13:18:31.779: INFO: 50 %ile: 869.567665ms May 16 13:18:31.779: INFO: 90 %ile: 1.059932407s May 16 13:18:31.779: INFO: 99 %ile: 1.194866013s May 16 13:18:31.779: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:18:31.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6181" for this suite. 
May 16 13:18:55.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:18:55.898: INFO: namespace svc-latency-6181 deletion completed in 24.112577361s • [SLOW TEST:40.309 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:18:55.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 16 13:19:06.050: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.050: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.082832 6 log.go:172] (0xc002ffc580) (0xc002d681e0) Create stream I0516 13:19:06.082868 6 log.go:172] 
(0xc002ffc580) (0xc002d681e0) Stream added, broadcasting: 1 I0516 13:19:06.084698 6 log.go:172] (0xc002ffc580) Reply frame received for 1 I0516 13:19:06.084762 6 log.go:172] (0xc002ffc580) (0xc002d68280) Create stream I0516 13:19:06.084779 6 log.go:172] (0xc002ffc580) (0xc002d68280) Stream added, broadcasting: 3 I0516 13:19:06.086011 6 log.go:172] (0xc002ffc580) Reply frame received for 3 I0516 13:19:06.086059 6 log.go:172] (0xc002ffc580) (0xc002d68320) Create stream I0516 13:19:06.086072 6 log.go:172] (0xc002ffc580) (0xc002d68320) Stream added, broadcasting: 5 I0516 13:19:06.087065 6 log.go:172] (0xc002ffc580) Reply frame received for 5 I0516 13:19:06.172260 6 log.go:172] (0xc002ffc580) Data frame received for 5 I0516 13:19:06.172289 6 log.go:172] (0xc002d68320) (5) Data frame handling I0516 13:19:06.172322 6 log.go:172] (0xc002ffc580) Data frame received for 3 I0516 13:19:06.172333 6 log.go:172] (0xc002d68280) (3) Data frame handling I0516 13:19:06.172347 6 log.go:172] (0xc002d68280) (3) Data frame sent I0516 13:19:06.172357 6 log.go:172] (0xc002ffc580) Data frame received for 3 I0516 13:19:06.172370 6 log.go:172] (0xc002d68280) (3) Data frame handling I0516 13:19:06.174098 6 log.go:172] (0xc002ffc580) Data frame received for 1 I0516 13:19:06.174133 6 log.go:172] (0xc002d681e0) (1) Data frame handling I0516 13:19:06.174165 6 log.go:172] (0xc002d681e0) (1) Data frame sent I0516 13:19:06.174189 6 log.go:172] (0xc002ffc580) (0xc002d681e0) Stream removed, broadcasting: 1 I0516 13:19:06.174217 6 log.go:172] (0xc002ffc580) Go away received I0516 13:19:06.174309 6 log.go:172] (0xc002ffc580) (0xc002d681e0) Stream removed, broadcasting: 1 I0516 13:19:06.174330 6 log.go:172] (0xc002ffc580) (0xc002d68280) Stream removed, broadcasting: 3 I0516 13:19:06.174340 6 log.go:172] (0xc002ffc580) (0xc002d68320) Stream removed, broadcasting: 5 May 16 13:19:06.174: INFO: Exec stderr: "" May 16 13:19:06.174: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.174: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.207715 6 log.go:172] (0xc000b1aa50) (0xc00134cb40) Create stream I0516 13:19:06.207747 6 log.go:172] (0xc000b1aa50) (0xc00134cb40) Stream added, broadcasting: 1 I0516 13:19:06.210980 6 log.go:172] (0xc000b1aa50) Reply frame received for 1 I0516 13:19:06.211046 6 log.go:172] (0xc000b1aa50) (0xc002d683c0) Create stream I0516 13:19:06.211071 6 log.go:172] (0xc000b1aa50) (0xc002d683c0) Stream added, broadcasting: 3 I0516 13:19:06.211964 6 log.go:172] (0xc000b1aa50) Reply frame received for 3 I0516 13:19:06.211991 6 log.go:172] (0xc000b1aa50) (0xc00134cbe0) Create stream I0516 13:19:06.212003 6 log.go:172] (0xc000b1aa50) (0xc00134cbe0) Stream added, broadcasting: 5 I0516 13:19:06.212838 6 log.go:172] (0xc000b1aa50) Reply frame received for 5 I0516 13:19:06.281447 6 log.go:172] (0xc000b1aa50) Data frame received for 3 I0516 13:19:06.281472 6 log.go:172] (0xc002d683c0) (3) Data frame handling I0516 13:19:06.281485 6 log.go:172] (0xc002d683c0) (3) Data frame sent I0516 13:19:06.281492 6 log.go:172] (0xc000b1aa50) Data frame received for 3 I0516 13:19:06.281496 6 log.go:172] (0xc002d683c0) (3) Data frame handling I0516 13:19:06.281811 6 log.go:172] (0xc000b1aa50) Data frame received for 5 I0516 13:19:06.281822 6 log.go:172] (0xc00134cbe0) (5) Data frame handling I0516 13:19:06.283214 6 log.go:172] (0xc000b1aa50) Data frame received for 1 I0516 13:19:06.283232 6 log.go:172] (0xc00134cb40) (1) Data frame handling I0516 13:19:06.283254 6 log.go:172] (0xc00134cb40) (1) Data frame sent I0516 13:19:06.283269 6 log.go:172] (0xc000b1aa50) (0xc00134cb40) Stream removed, broadcasting: 1 I0516 13:19:06.283353 6 log.go:172] (0xc000b1aa50) (0xc00134cb40) Stream removed, broadcasting: 1 I0516 13:19:06.283366 6 log.go:172] (0xc000b1aa50) (0xc002d683c0) Stream 
removed, broadcasting: 3 I0516 13:19:06.283417 6 log.go:172] (0xc000b1aa50) Go away received I0516 13:19:06.283555 6 log.go:172] (0xc000b1aa50) (0xc00134cbe0) Stream removed, broadcasting: 5 May 16 13:19:06.283: INFO: Exec stderr: "" May 16 13:19:06.283: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.283: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.347154 6 log.go:172] (0xc002002d10) (0xc0016e5040) Create stream I0516 13:19:06.347189 6 log.go:172] (0xc002002d10) (0xc0016e5040) Stream added, broadcasting: 1 I0516 13:19:06.349765 6 log.go:172] (0xc002002d10) Reply frame received for 1 I0516 13:19:06.349800 6 log.go:172] (0xc002002d10) (0xc002d68460) Create stream I0516 13:19:06.349811 6 log.go:172] (0xc002002d10) (0xc002d68460) Stream added, broadcasting: 3 I0516 13:19:06.350739 6 log.go:172] (0xc002002d10) Reply frame received for 3 I0516 13:19:06.350769 6 log.go:172] (0xc002002d10) (0xc002d68500) Create stream I0516 13:19:06.350778 6 log.go:172] (0xc002002d10) (0xc002d68500) Stream added, broadcasting: 5 I0516 13:19:06.351655 6 log.go:172] (0xc002002d10) Reply frame received for 5 I0516 13:19:06.421259 6 log.go:172] (0xc002002d10) Data frame received for 3 I0516 13:19:06.421297 6 log.go:172] (0xc002d68460) (3) Data frame handling I0516 13:19:06.421306 6 log.go:172] (0xc002d68460) (3) Data frame sent I0516 13:19:06.421320 6 log.go:172] (0xc002002d10) Data frame received for 3 I0516 13:19:06.421328 6 log.go:172] (0xc002d68460) (3) Data frame handling I0516 13:19:06.421346 6 log.go:172] (0xc002002d10) Data frame received for 5 I0516 13:19:06.421357 6 log.go:172] (0xc002d68500) (5) Data frame handling I0516 13:19:06.422947 6 log.go:172] (0xc002002d10) Data frame received for 1 I0516 13:19:06.422985 6 log.go:172] (0xc0016e5040) (1) Data frame handling I0516 13:19:06.423022 6 log.go:172] 
(0xc0016e5040) (1) Data frame sent I0516 13:19:06.423090 6 log.go:172] (0xc002002d10) (0xc0016e5040) Stream removed, broadcasting: 1 I0516 13:19:06.423173 6 log.go:172] (0xc002002d10) Go away received I0516 13:19:06.423295 6 log.go:172] (0xc002002d10) (0xc0016e5040) Stream removed, broadcasting: 1 I0516 13:19:06.423325 6 log.go:172] (0xc002002d10) (0xc002d68460) Stream removed, broadcasting: 3 I0516 13:19:06.423352 6 log.go:172] (0xc002002d10) (0xc002d68500) Stream removed, broadcasting: 5 May 16 13:19:06.423: INFO: Exec stderr: "" May 16 13:19:06.423: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.423: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.455675 6 log.go:172] (0xc002ffde40) (0xc002d68820) Create stream I0516 13:19:06.455718 6 log.go:172] (0xc002ffde40) (0xc002d68820) Stream added, broadcasting: 1 I0516 13:19:06.459080 6 log.go:172] (0xc002ffde40) Reply frame received for 1 I0516 13:19:06.459143 6 log.go:172] (0xc002ffde40) (0xc00195ebe0) Create stream I0516 13:19:06.459164 6 log.go:172] (0xc002ffde40) (0xc00195ebe0) Stream added, broadcasting: 3 I0516 13:19:06.460162 6 log.go:172] (0xc002ffde40) Reply frame received for 3 I0516 13:19:06.460206 6 log.go:172] (0xc002ffde40) (0xc0009de0a0) Create stream I0516 13:19:06.460220 6 log.go:172] (0xc002ffde40) (0xc0009de0a0) Stream added, broadcasting: 5 I0516 13:19:06.461284 6 log.go:172] (0xc002ffde40) Reply frame received for 5 I0516 13:19:06.520642 6 log.go:172] (0xc002ffde40) Data frame received for 5 I0516 13:19:06.520722 6 log.go:172] (0xc0009de0a0) (5) Data frame handling I0516 13:19:06.520814 6 log.go:172] (0xc002ffde40) Data frame received for 3 I0516 13:19:06.520843 6 log.go:172] (0xc00195ebe0) (3) Data frame handling I0516 13:19:06.520867 6 log.go:172] (0xc00195ebe0) (3) Data frame sent I0516 13:19:06.520888 6 log.go:172] 
(0xc002ffde40) Data frame received for 3 I0516 13:19:06.520905 6 log.go:172] (0xc00195ebe0) (3) Data frame handling I0516 13:19:06.523031 6 log.go:172] (0xc002ffde40) Data frame received for 1 I0516 13:19:06.523116 6 log.go:172] (0xc002d68820) (1) Data frame handling I0516 13:19:06.523183 6 log.go:172] (0xc002d68820) (1) Data frame sent I0516 13:19:06.523227 6 log.go:172] (0xc002ffde40) (0xc002d68820) Stream removed, broadcasting: 1 I0516 13:19:06.523292 6 log.go:172] (0xc002ffde40) Go away received I0516 13:19:06.523423 6 log.go:172] (0xc002ffde40) (0xc002d68820) Stream removed, broadcasting: 1 I0516 13:19:06.523450 6 log.go:172] (0xc002ffde40) (0xc00195ebe0) Stream removed, broadcasting: 3 I0516 13:19:06.523462 6 log.go:172] (0xc002ffde40) (0xc0009de0a0) Stream removed, broadcasting: 5 May 16 13:19:06.523: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 16 13:19:06.523: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.523: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.569370 6 log.go:172] (0xc001c2cf20) (0xc00195f040) Create stream I0516 13:19:06.569405 6 log.go:172] (0xc001c2cf20) (0xc00195f040) Stream added, broadcasting: 1 I0516 13:19:06.571997 6 log.go:172] (0xc001c2cf20) Reply frame received for 1 I0516 13:19:06.572045 6 log.go:172] (0xc001c2cf20) (0xc00195f0e0) Create stream I0516 13:19:06.572062 6 log.go:172] (0xc001c2cf20) (0xc00195f0e0) Stream added, broadcasting: 3 I0516 13:19:06.573032 6 log.go:172] (0xc001c2cf20) Reply frame received for 3 I0516 13:19:06.573068 6 log.go:172] (0xc001c2cf20) (0xc0016e50e0) Create stream I0516 13:19:06.573077 6 log.go:172] (0xc001c2cf20) (0xc0016e50e0) Stream added, broadcasting: 5 I0516 13:19:06.574112 6 log.go:172] (0xc001c2cf20) Reply frame received for 5 I0516 
13:19:06.633511 6 log.go:172] (0xc001c2cf20) Data frame received for 3 I0516 13:19:06.633551 6 log.go:172] (0xc00195f0e0) (3) Data frame handling I0516 13:19:06.633569 6 log.go:172] (0xc00195f0e0) (3) Data frame sent I0516 13:19:06.633587 6 log.go:172] (0xc001c2cf20) Data frame received for 3 I0516 13:19:06.633602 6 log.go:172] (0xc00195f0e0) (3) Data frame handling I0516 13:19:06.633759 6 log.go:172] (0xc001c2cf20) Data frame received for 5 I0516 13:19:06.633805 6 log.go:172] (0xc0016e50e0) (5) Data frame handling I0516 13:19:06.635427 6 log.go:172] (0xc001c2cf20) Data frame received for 1 I0516 13:19:06.635464 6 log.go:172] (0xc00195f040) (1) Data frame handling I0516 13:19:06.635481 6 log.go:172] (0xc00195f040) (1) Data frame sent I0516 13:19:06.635494 6 log.go:172] (0xc001c2cf20) (0xc00195f040) Stream removed, broadcasting: 1 I0516 13:19:06.635510 6 log.go:172] (0xc001c2cf20) Go away received I0516 13:19:06.635656 6 log.go:172] (0xc001c2cf20) (0xc00195f040) Stream removed, broadcasting: 1 I0516 13:19:06.635678 6 log.go:172] (0xc001c2cf20) (0xc00195f0e0) Stream removed, broadcasting: 3 I0516 13:19:06.635690 6 log.go:172] (0xc001c2cf20) (0xc0016e50e0) Stream removed, broadcasting: 5 May 16 13:19:06.635: INFO: Exec stderr: "" May 16 13:19:06.635: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.635: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.667996 6 log.go:172] (0xc003216c60) (0xc002d68b40) Create stream I0516 13:19:06.668027 6 log.go:172] (0xc003216c60) (0xc002d68b40) Stream added, broadcasting: 1 I0516 13:19:06.670544 6 log.go:172] (0xc003216c60) Reply frame received for 1 I0516 13:19:06.670609 6 log.go:172] (0xc003216c60) (0xc00134cc80) Create stream I0516 13:19:06.670625 6 log.go:172] (0xc003216c60) (0xc00134cc80) Stream added, broadcasting: 3 I0516 13:19:06.671817 6 
log.go:172] (0xc003216c60) Reply frame received for 3 I0516 13:19:06.671855 6 log.go:172] (0xc003216c60) (0xc00134cd20) Create stream I0516 13:19:06.671869 6 log.go:172] (0xc003216c60) (0xc00134cd20) Stream added, broadcasting: 5 I0516 13:19:06.672859 6 log.go:172] (0xc003216c60) Reply frame received for 5 I0516 13:19:06.723330 6 log.go:172] (0xc003216c60) Data frame received for 5 I0516 13:19:06.723374 6 log.go:172] (0xc00134cd20) (5) Data frame handling I0516 13:19:06.723408 6 log.go:172] (0xc003216c60) Data frame received for 3 I0516 13:19:06.723428 6 log.go:172] (0xc00134cc80) (3) Data frame handling I0516 13:19:06.723455 6 log.go:172] (0xc00134cc80) (3) Data frame sent I0516 13:19:06.723473 6 log.go:172] (0xc003216c60) Data frame received for 3 I0516 13:19:06.723490 6 log.go:172] (0xc00134cc80) (3) Data frame handling I0516 13:19:06.724663 6 log.go:172] (0xc003216c60) Data frame received for 1 I0516 13:19:06.724687 6 log.go:172] (0xc002d68b40) (1) Data frame handling I0516 13:19:06.724713 6 log.go:172] (0xc002d68b40) (1) Data frame sent I0516 13:19:06.724740 6 log.go:172] (0xc003216c60) (0xc002d68b40) Stream removed, broadcasting: 1 I0516 13:19:06.724762 6 log.go:172] (0xc003216c60) Go away received I0516 13:19:06.724895 6 log.go:172] (0xc003216c60) (0xc002d68b40) Stream removed, broadcasting: 1 I0516 13:19:06.724930 6 log.go:172] (0xc003216c60) (0xc00134cc80) Stream removed, broadcasting: 3 I0516 13:19:06.724942 6 log.go:172] (0xc003216c60) (0xc00134cd20) Stream removed, broadcasting: 5 May 16 13:19:06.724: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 16 13:19:06.725: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.725: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.752846 6 log.go:172] 
(0xc003217550) (0xc002d68e60) Create stream I0516 13:19:06.752881 6 log.go:172] (0xc003217550) (0xc002d68e60) Stream added, broadcasting: 1 I0516 13:19:06.755463 6 log.go:172] (0xc003217550) Reply frame received for 1 I0516 13:19:06.755513 6 log.go:172] (0xc003217550) (0xc00134cdc0) Create stream I0516 13:19:06.755536 6 log.go:172] (0xc003217550) (0xc00134cdc0) Stream added, broadcasting: 3 I0516 13:19:06.756617 6 log.go:172] (0xc003217550) Reply frame received for 3 I0516 13:19:06.756652 6 log.go:172] (0xc003217550) (0xc00134ce60) Create stream I0516 13:19:06.756663 6 log.go:172] (0xc003217550) (0xc00134ce60) Stream added, broadcasting: 5 I0516 13:19:06.757828 6 log.go:172] (0xc003217550) Reply frame received for 5 I0516 13:19:06.798864 6 log.go:172] (0xc003217550) Data frame received for 3 I0516 13:19:06.798910 6 log.go:172] (0xc00134cdc0) (3) Data frame handling I0516 13:19:06.798932 6 log.go:172] (0xc00134cdc0) (3) Data frame sent I0516 13:19:06.799064 6 log.go:172] (0xc003217550) Data frame received for 3 I0516 13:19:06.799110 6 log.go:172] (0xc00134cdc0) (3) Data frame handling I0516 13:19:06.799150 6 log.go:172] (0xc003217550) Data frame received for 5 I0516 13:19:06.799177 6 log.go:172] (0xc00134ce60) (5) Data frame handling I0516 13:19:06.800749 6 log.go:172] (0xc003217550) Data frame received for 1 I0516 13:19:06.800771 6 log.go:172] (0xc002d68e60) (1) Data frame handling I0516 13:19:06.800784 6 log.go:172] (0xc002d68e60) (1) Data frame sent I0516 13:19:06.800806 6 log.go:172] (0xc003217550) (0xc002d68e60) Stream removed, broadcasting: 1 I0516 13:19:06.800889 6 log.go:172] (0xc003217550) (0xc002d68e60) Stream removed, broadcasting: 1 I0516 13:19:06.800903 6 log.go:172] (0xc003217550) (0xc00134cdc0) Stream removed, broadcasting: 3 I0516 13:19:06.800918 6 log.go:172] (0xc003217550) (0xc00134ce60) Stream removed, broadcasting: 5 May 16 13:19:06.800: INFO: Exec stderr: "" May 16 13:19:06.800: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.800: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.803508 6 log.go:172] (0xc003217550) Go away received I0516 13:19:06.839586 6 log.go:172] (0xc002090210) (0xc00134d0e0) Create stream I0516 13:19:06.839617 6 log.go:172] (0xc002090210) (0xc00134d0e0) Stream added, broadcasting: 1 I0516 13:19:06.843210 6 log.go:172] (0xc002090210) Reply frame received for 1 I0516 13:19:06.843276 6 log.go:172] (0xc002090210) (0xc0009de140) Create stream I0516 13:19:06.843294 6 log.go:172] (0xc002090210) (0xc0009de140) Stream added, broadcasting: 3 I0516 13:19:06.844439 6 log.go:172] (0xc002090210) Reply frame received for 3 I0516 13:19:06.844471 6 log.go:172] (0xc002090210) (0xc00134d180) Create stream I0516 13:19:06.844489 6 log.go:172] (0xc002090210) (0xc00134d180) Stream added, broadcasting: 5 I0516 13:19:06.845712 6 log.go:172] (0xc002090210) Reply frame received for 5 I0516 13:19:06.915910 6 log.go:172] (0xc002090210) Data frame received for 5 I0516 13:19:06.915939 6 log.go:172] (0xc00134d180) (5) Data frame handling I0516 13:19:06.916010 6 log.go:172] (0xc002090210) Data frame received for 3 I0516 13:19:06.916069 6 log.go:172] (0xc0009de140) (3) Data frame handling I0516 13:19:06.916095 6 log.go:172] (0xc0009de140) (3) Data frame sent I0516 13:19:06.916115 6 log.go:172] (0xc002090210) Data frame received for 3 I0516 13:19:06.916124 6 log.go:172] (0xc0009de140) (3) Data frame handling I0516 13:19:06.917805 6 log.go:172] (0xc002090210) Data frame received for 1 I0516 13:19:06.917819 6 log.go:172] (0xc00134d0e0) (1) Data frame handling I0516 13:19:06.917830 6 log.go:172] (0xc00134d0e0) (1) Data frame sent I0516 13:19:06.917840 6 log.go:172] (0xc002090210) (0xc00134d0e0) Stream removed, broadcasting: 1 I0516 13:19:06.917850 6 log.go:172] (0xc002090210) Go away received I0516 13:19:06.918022 6 
log.go:172] (0xc002090210) (0xc00134d0e0) Stream removed, broadcasting: 1 I0516 13:19:06.918051 6 log.go:172] (0xc002090210) (0xc0009de140) Stream removed, broadcasting: 3 I0516 13:19:06.918068 6 log.go:172] (0xc002090210) (0xc00134d180) Stream removed, broadcasting: 5 May 16 13:19:06.918: INFO: Exec stderr: "" May 16 13:19:06.918: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:06.918: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:06.945580 6 log.go:172] (0xc002090fd0) (0xc00134d540) Create stream I0516 13:19:06.945611 6 log.go:172] (0xc002090fd0) (0xc00134d540) Stream added, broadcasting: 1 I0516 13:19:06.963388 6 log.go:172] (0xc002090fd0) Reply frame received for 1 I0516 13:19:06.963456 6 log.go:172] (0xc002090fd0) (0xc0025c2000) Create stream I0516 13:19:06.963472 6 log.go:172] (0xc002090fd0) (0xc0025c2000) Stream added, broadcasting: 3 I0516 13:19:06.964655 6 log.go:172] (0xc002090fd0) Reply frame received for 3 I0516 13:19:06.965577 6 log.go:172] (0xc002090fd0) (0xc000a6c000) Create stream I0516 13:19:06.965606 6 log.go:172] (0xc002090fd0) (0xc000a6c000) Stream added, broadcasting: 5 I0516 13:19:06.966581 6 log.go:172] (0xc002090fd0) Reply frame received for 5 I0516 13:19:07.018844 6 log.go:172] (0xc002090fd0) Data frame received for 5 I0516 13:19:07.018870 6 log.go:172] (0xc000a6c000) (5) Data frame handling I0516 13:19:07.018906 6 log.go:172] (0xc002090fd0) Data frame received for 3 I0516 13:19:07.018938 6 log.go:172] (0xc0025c2000) (3) Data frame handling I0516 13:19:07.018959 6 log.go:172] (0xc0025c2000) (3) Data frame sent I0516 13:19:07.018971 6 log.go:172] (0xc002090fd0) Data frame received for 3 I0516 13:19:07.018982 6 log.go:172] (0xc0025c2000) (3) Data frame handling I0516 13:19:07.020371 6 log.go:172] (0xc002090fd0) Data frame received for 1 I0516 13:19:07.020414 6 
log.go:172] (0xc00134d540) (1) Data frame handling I0516 13:19:07.020442 6 log.go:172] (0xc00134d540) (1) Data frame sent I0516 13:19:07.020466 6 log.go:172] (0xc002090fd0) (0xc00134d540) Stream removed, broadcasting: 1 I0516 13:19:07.020494 6 log.go:172] (0xc002090fd0) Go away received I0516 13:19:07.020638 6 log.go:172] (0xc002090fd0) (0xc00134d540) Stream removed, broadcasting: 1 I0516 13:19:07.020669 6 log.go:172] (0xc002090fd0) (0xc0025c2000) Stream removed, broadcasting: 3 I0516 13:19:07.020686 6 log.go:172] (0xc002090fd0) (0xc000a6c000) Stream removed, broadcasting: 5 May 16 13:19:07.020: INFO: Exec stderr: "" May 16 13:19:07.020: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1566 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:19:07.020: INFO: >>> kubeConfig: /root/.kube/config I0516 13:19:07.053738 6 log.go:172] (0xc000da6420) (0xc000a6c6e0) Create stream I0516 13:19:07.053773 6 log.go:172] (0xc000da6420) (0xc000a6c6e0) Stream added, broadcasting: 1 I0516 13:19:07.055495 6 log.go:172] (0xc000da6420) Reply frame received for 1 I0516 13:19:07.055534 6 log.go:172] (0xc000da6420) (0xc000a6c960) Create stream I0516 13:19:07.055545 6 log.go:172] (0xc000da6420) (0xc000a6c960) Stream added, broadcasting: 3 I0516 13:19:07.056578 6 log.go:172] (0xc000da6420) Reply frame received for 3 I0516 13:19:07.056626 6 log.go:172] (0xc000da6420) (0xc00039c000) Create stream I0516 13:19:07.056644 6 log.go:172] (0xc000da6420) (0xc00039c000) Stream added, broadcasting: 5 I0516 13:19:07.057735 6 log.go:172] (0xc000da6420) Reply frame received for 5 I0516 13:19:07.134934 6 log.go:172] (0xc000da6420) Data frame received for 5 I0516 13:19:07.134976 6 log.go:172] (0xc00039c000) (5) Data frame handling I0516 13:19:07.135004 6 log.go:172] (0xc000da6420) Data frame received for 3 I0516 13:19:07.135035 6 log.go:172] (0xc000a6c960) (3) Data frame handling I0516 
13:19:07.135077 6 log.go:172] (0xc000a6c960) (3) Data frame sent I0516 13:19:07.135100 6 log.go:172] (0xc000da6420) Data frame received for 3 I0516 13:19:07.135121 6 log.go:172] (0xc000a6c960) (3) Data frame handling I0516 13:19:07.136711 6 log.go:172] (0xc000da6420) Data frame received for 1 I0516 13:19:07.136751 6 log.go:172] (0xc000a6c6e0) (1) Data frame handling I0516 13:19:07.136782 6 log.go:172] (0xc000a6c6e0) (1) Data frame sent I0516 13:19:07.136824 6 log.go:172] (0xc000da6420) (0xc000a6c6e0) Stream removed, broadcasting: 1 I0516 13:19:07.136868 6 log.go:172] (0xc000da6420) Go away received I0516 13:19:07.136958 6 log.go:172] (0xc000da6420) (0xc000a6c6e0) Stream removed, broadcasting: 1 I0516 13:19:07.136998 6 log.go:172] (0xc000da6420) (0xc000a6c960) Stream removed, broadcasting: 3 I0516 13:19:07.137012 6 log.go:172] (0xc000da6420) (0xc00039c000) Stream removed, broadcasting: 5 May 16 13:19:07.137: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:19:07.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1566" for this suite. 
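Each exec above just cats `/etc/hosts` (or the bind-mounted `/etc/hosts-original`) and decides whether the content looks kubelet-managed. The kubelet marks the files it manages with a leading comment; a minimal sketch of that check (the header string is an assumption based on kubelet source for this release line, so verify it against your cluster's version):

```go
package main

import (
	"fmt"
	"strings"
)

// managedHostsHeader is the comment the kubelet prepends to /etc/hosts
// files it manages. Assumption based on kubelet source; not taken from
// the log above.
const managedHostsHeader = "# Kubernetes-managed hosts file"

// isKubeletManaged reports whether an /etc/hosts payload looks
// kubelet-managed: expected true for the hostNetwork=false pod, false
// for the hostNetwork=true pod and for the container that mounts its
// own /etc/hosts.
func isKubeletManaged(hosts string) bool {
	return strings.HasPrefix(strings.TrimSpace(hosts), managedHostsHeader)
}

func main() {
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	nodeFile := "127.0.0.1\tlocalhost\n::1\tlocalhost\n"
	fmt.Println(isKubeletManaged(managed))  // true
	fmt.Println(isKubeletManaged(nodeFile)) // false
}
```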
May 16 13:19:57.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:19:57.233: INFO: namespace e2e-kubelet-etc-hosts-1566 deletion completed in 50.091718606s • [SLOW TEST:61.335 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:19:57.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 16 13:19:57.821: INFO: created pod pod-service-account-defaultsa May 16 13:19:57.821: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 16 13:19:57.831: INFO: created pod pod-service-account-mountsa May 16 13:19:57.831: INFO: pod pod-service-account-mountsa service account token volume mount: true May 16 13:19:57.859: INFO: created pod pod-service-account-nomountsa May 16 13:19:57.859: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 16 13:19:57.873: INFO: created pod pod-service-account-defaultsa-mountspec May 16 
13:19:57.873: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 16 13:19:57.930: INFO: created pod pod-service-account-mountsa-mountspec May 16 13:19:57.930: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 16 13:19:57.972: INFO: created pod pod-service-account-nomountsa-mountspec May 16 13:19:57.972: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 16 13:19:58.005: INFO: created pod pod-service-account-defaultsa-nomountspec May 16 13:19:58.005: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 16 13:19:58.079: INFO: created pod pod-service-account-mountsa-nomountspec May 16 13:19:58.079: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 16 13:19:58.092: INFO: created pod pod-service-account-nomountsa-nomountspec May 16 13:19:58.092: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:19:58.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2908" for this suite. 
May 16 13:20:28.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:20:28.448: INFO: namespace svcaccounts-2908 deletion completed in 30.191764666s
• [SLOW TEST:31.214 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:20:28.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 13:20:28.614: INFO: Creating deployment "nginx-deployment"
May 16 13:20:28.619: INFO: Waiting for observed generation 1
May 16 13:20:30.720: INFO: Waiting for all required pods to come up
May 16 13:20:30.724: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 16 13:20:40.736: INFO: Waiting for deployment "nginx-deployment" to complete
May 16 13:20:40.740: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 16 13:20:40.746:
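This test scales nginx-deployment from 10 to 30 replicas while two replica sets are live (the old one held at 8 replicas, the new bad-image one at 5, with maxSurge 3 allowing 33 in total), and checks that the 20 extra replicas are distributed in proportion to each replica set's current size. A simplified largest-remainder sketch of that distribution — not the controller's exact code, which derives each fraction from the deployment.kubernetes.io/max-replicas annotation — reproduces the .spec.replicas values the test verifies:

```python
def distribute_proportionally(to_add, current_sizes):
    """Split `to_add` new replicas across replica sets in proportion to
    their current sizes, with largest-remainder rounding so the total is
    exact. Simplified model of Deployment proportional scaling."""
    total = sum(current_sizes)
    shares = [to_add * size / total for size in current_sizes]
    added = [int(s) for s in shares]  # floor of each proportional share
    leftover = to_add - sum(added)
    # hand leftovers to the largest fractional remainders first
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - added[i], reverse=True)
    for i in by_remainder[:leftover]:
        added[i] += 1
    return [size + extra for size, extra in zip(current_sizes, added)]

# Scaling 10 -> 30 with maxSurge 3 allows 33 replicas; the live replica
# sets hold 8 and 5 (13 total), so 20 are added: 8 -> 20 and 5 -> 13.
print(distribute_proportionally(20, [8, 5]))  # -> [20, 13]
```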
INFO: Updating deployment nginx-deployment
May 16 13:20:40.746: INFO: Waiting for observed generation 2
May 16 13:20:42.758: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 16 13:20:42.762: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 16 13:20:42.764: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 16 13:20:42.773: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 16 13:20:42.773: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 16 13:20:42.775: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 16 13:20:42.779: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 16 13:20:42.779: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 16 13:20:42.784: INFO: Updating deployment nginx-deployment
May 16 13:20:42.784: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 16 13:20:42.840: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 16 13:20:42.962: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 16 13:20:45.980: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7497,SelfLink:/apis/apps/v1/namespaces/deployment-7497/deployments/nginx-deployment,UID:86191b5b-94f7-414f-b197-1b31929e703b,ResourceVersion:11215978,Generation:3,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-16 13:20:42 +0000 UTC 2020-05-16 13:20:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-16 13:20:43 +0000 UTC 2020-05-16 13:20:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 16 13:20:46.336: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7497,SelfLink:/apis/apps/v1/namespaces/deployment-7497/replicasets/nginx-deployment-55fb7cb77f,UID:d4457664-676d-41e4-bb92-0ef7fcf5b7df,ResourceVersion:11215972,Generation:3,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86191b5b-94f7-414f-b197-1b31929e703b 0xc002e7f207 0xc002e7f208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 13:20:46.336: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 16 13:20:46.338: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7497,SelfLink:/apis/apps/v1/namespaces/deployment-7497/replicasets/nginx-deployment-7b8c6f4498,UID:c56659c9-3e0e-4c92-95c4-00d3ba96f1d0,ResourceVersion:11215955,Generation:3,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 86191b5b-94f7-414f-b197-1b31929e703b 0xc002e7f2e7 0xc002e7f2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-4pgf6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4pgf6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-4pgf6,UID:7747a1f7-2901-40ab-b90a-91b1a5048791,ResourceVersion:11215895,Generation:0,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002e7fc97 0xc002e7fc98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002e7fd10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e7fd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-8sr5b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8sr5b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-8sr5b,UID:4b41121c-18ea-40a0-b933-d6d85976c8b3,ResourceVersion:11216027,Generation:0,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002e7fe17 0xc002e7fe18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e7fe90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e7feb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.142,StartTime:2020-05-16 13:20:40 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-g2lw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g2lw8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-g2lw8,UID:6edd6554-92c0-4c1c-b700-d4825eed633b,ResourceVersion:11216037,Generation:0,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002e7ffa7 0xc002e7ffa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7e030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.143,StartTime:2020-05-16 13:20:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-jklnr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jklnr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-jklnr,UID:dc4a62ff-5716-4f02-87b0-11ee9d7b1e5c,ResourceVersion:11215898,Generation:0,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e147 0xc002b7e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002b7e1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-m8jzq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m8jzq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-m8jzq,UID:08bb5529-6985-4a15-b013-f3783d2c83c1,ResourceVersion:11216024,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e2b7 0xc002b7e2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7e330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.428: INFO: Pod "nginx-deployment-55fb7cb77f-mg4g2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mg4g2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-mg4g2,UID:2c0b17e0-d1b2-4019-a9c3-8fff2c41a400,ResourceVersion:11216017,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e427 0xc002b7e428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7e4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.429: INFO: Pod "nginx-deployment-55fb7cb77f-mvgfb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mvgfb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-mvgfb,UID:bd8c20c6-d7d0-49d2-84aa-2fe0c38d59fd,ResourceVersion:11216006,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e597 0xc002b7e598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002b7e610} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.429: INFO: Pod "nginx-deployment-55fb7cb77f-n62qh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n62qh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-n62qh,UID:87ed3ace-6b53-439f-b13b-48951801c94f,ResourceVersion:11215968,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e707 0xc002b7e708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7e780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.429: INFO: Pod "nginx-deployment-55fb7cb77f-tg67p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tg67p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-tg67p,UID:a97e9fe2-f3a9-46e5-bdd0-50c655c2d004,ResourceVersion:11216040,Generation:0,CreationTimestamp:2020-05-16 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7e887 0xc002b7e888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7e900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7e920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.229,StartTime:2020-05-16 13:20:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.429: INFO: Pod "nginx-deployment-55fb7cb77f-v96lc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v96lc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-v96lc,UID:c45c5ed0-55b4-467c-ad8e-2b64171aaa24,ResourceVersion:11216023,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7ea17 0xc002b7ea18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002b7ea90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7eab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.429: INFO: Pod "nginx-deployment-55fb7cb77f-vj7jx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vj7jx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-vj7jx,UID:6346e1d3-ce62-4bfa-b05c-266ade86973d,ResourceVersion:11216035,Generation:0,CreationTimestamp:2020-05-16 13:20:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7eb97 0xc002b7eb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7ec10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7ec30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-55fb7cb77f-vndbs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vndbs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-vndbs,UID:e4fec735-6e22-4f5d-b2b0-62ae90b2493a,ResourceVersion:11215984,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7ed07 0xc002b7ed08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7ed80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7eda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-55fb7cb77f-w9knd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w9knd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-55fb7cb77f-w9knd,UID:0c28ba0e-e300-408e-94bd-e5f160a16244,ResourceVersion:11215975,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d4457664-676d-41e4-bb92-0ef7fcf5b7df 0xc002b7ee77 0xc002b7ee78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002b7eef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7ef10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-7bw8s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7bw8s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-7bw8s,UID:02f5954c-e87f-4207-8890-dc769d383822,ResourceVersion:11215970,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7efe7 0xc002b7efe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-7pmrk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7pmrk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-7pmrk,UID:06e6a8c6-b0b2-4015-8bd4-31fe3b76096e,ResourceVersion:11215799,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f147 0xc002b7f148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.138,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35a1f456fae6adcb3d63bc2fe9bff994c48a46d81edd21987c0a7e1ad9d9bf4b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-97kmh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-97kmh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-97kmh,UID:f54b16aa-edeb-453d-b7ec-271a69c0ddfa,ResourceVersion:11215786,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f2b7 0xc002b7f2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.137,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f0e0be142ceaca4243ee6201b9f9a1f2394908540dde4a9e4d7bc5c3c3f1f62}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-bj4lq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bj4lq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-bj4lq,UID:d4ae0829-af1f-4087-8236-9ef1f6c34d91,ResourceVersion:11215989,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f427 0xc002b7f428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-crlfd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-crlfd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-crlfd,UID:cde733aa-1fad-4a59-b79e-0ca85ef098f3,ResourceVersion:11215834,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f587 0xc002b7f588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.141,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://86c059e95375cfb2e9075435db7d356d01720728ae6a48328383f3cbb89ee225}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-cxwsn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cxwsn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-cxwsn,UID:97e719d7-d54c-4c8e-8a3f-906195f6726e,ResourceVersion:11215811,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f707 0xc002b7f708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.139,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a0a02750649f6fdcfcc18636d181bb9a93a436fbd08f6cc6fc670518b53cfa98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.430: INFO: Pod "nginx-deployment-7b8c6f4498-d6kn7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d6kn7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-d6kn7,UID:cd707be8-9c1b-427a-97cc-b1a8be915bb0,ResourceVersion:11215999,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f877 0xc002b7f878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7f8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7f910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-f4cmq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f4cmq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-f4cmq,UID:79e8d3a1-b99d-4e34-b90c-913c24025add,ResourceVersion:11216018,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7f9d7 0xc002b7f9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7fa50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7fa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-gb8m2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gb8m2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-gb8m2,UID:794015be-c011-48c0-90ac-777a3343aaf6,ResourceVersion:11215809,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7fb37 0xc002b7fb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7fbb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7fbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.226,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d1918afcc8753ee64a537c6f618c9c2db719df50f96fc46ac7e1925a60b60d63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-ghjch" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ghjch,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-ghjch,UID:7a7a379a-a4b3-4532-a2c1-8d811b6af81a,ResourceVersion:11216004,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7fcc7 0xc002b7fcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7fd40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7fd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-k8q9s" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k8q9s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-k8q9s,UID:ac6d6bad-824d-4bf5-95df-c84245b71454,ResourceVersion:11215833,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7fe27 0xc002b7fe28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b7fea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b7fec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.227,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8f6b08054615ef214e6a627688f81310956744dd748e2af7a44ea861d4517ab2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-lxzx2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lxzx2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-lxzx2,UID:55926640-dcc0-4ec9-82eb-5f8d538071d6,ResourceVersion:11215991,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002b7ff97 0xc002b7ff98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-m5zht" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m5zht,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-m5zht,UID:4ded6e70-953c-4eb5-a6e2-d32313f8d64c,ResourceVersion:11215997,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e860f7 0xc002e860f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-m76pz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m76pz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-m76pz,UID:24447fcd-5140-413b-a8ce-c20d2c9fdb42,ResourceVersion:11215821,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e86257 0xc002e86258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e862d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e862f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.224,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d908f941701f663bf4994ebea6f49c9e2555ec16fe6e97e6a10c0fda7a23793a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.431: INFO: Pod "nginx-deployment-7b8c6f4498-nc94z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nc94z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-nc94z,UID:c2aef38a-f5c2-400d-a043-f57e6fea66e8,ResourceVersion:11215961,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e863c7 0xc002e863c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.432: INFO: Pod "nginx-deployment-7b8c6f4498-psbjw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-psbjw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-psbjw,UID:20c6211b-b895-4037-89d4-82704a7799c0,ResourceVersion:11215977,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e86537 0xc002e86538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e865b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e865d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.432: INFO: Pod "nginx-deployment-7b8c6f4498-q4nrv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q4nrv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-q4nrv,UID:e6785ab7-a387-4cb0-b78b-22f7972aa23c,ResourceVersion:11215982,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e86697 0xc002e86698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.432: INFO: Pod "nginx-deployment-7b8c6f4498-sbwh9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sbwh9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-sbwh9,UID:5808a74b-53ea-4341-8479-ad9a3def8095,ResourceVersion:11215825,Generation:0,CreationTimestamp:2020-05-16 13:20:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e867f7 0xc002e867f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86870} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.225,StartTime:2020-05-16 13:20:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-16 13:20:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://308df3511e7bf4ccffc5d14c1147566fa2ea56f9b15a4a096c49cebdb433df91}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.432: INFO: Pod "nginx-deployment-7b8c6f4498-tmsrg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tmsrg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-tmsrg,UID:11ad2a97-bc75-4214-98be-dd028d40b7a3,ResourceVersion:11215957,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e86967 0xc002e86968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e869e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 16 13:20:46.432: INFO: Pod "nginx-deployment-7b8c6f4498-w8fs8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w8fs8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7497,SelfLink:/api/v1/namespaces/deployment-7497/pods/nginx-deployment-7b8c6f4498-w8fs8,UID:69d34a1a-46d4-4b3f-83f9-8b6021bdd80b,ResourceVersion:11216026,Generation:0,CreationTimestamp:2020-05-16 13:20:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 c56659c9-3e0e-4c92-95c4-00d3ba96f1d0 0xc002e86ac7 0xc002e86ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wnmx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wnmx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wnmx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e86b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e86b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:20:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-16 13:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:20:46.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7497" for this suite. 
May 16 13:21:15.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:21:15.608: INFO: namespace deployment-7497 deletion completed in 28.869060612s • [SLOW TEST:47.159 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:21:15.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3d4ca07c-9944-4c82-b10b-9cb8c2e1e2c3 STEP: Creating a pod to test consume secrets May 16 13:21:15.733: INFO: Waiting up to 5m0s for pod "pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6" in namespace "secrets-1307" to be "success or failure" May 16 13:21:15.961: INFO: Pod "pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 227.270804ms May 16 13:21:17.965: INFO: Pod "pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.231075183s May 16 13:21:19.978: INFO: Pod "pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.244570452s STEP: Saw pod success May 16 13:21:19.978: INFO: Pod "pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6" satisfied condition "success or failure" May 16 13:21:19.980: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6 container secret-volume-test: STEP: delete the pod May 16 13:21:20.003: INFO: Waiting for pod pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6 to disappear May 16 13:21:20.013: INFO: Pod pod-secrets-6dd12d81-16e8-449e-bb14-e25660c37dc6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:21:20.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1307" for this suite. May 16 13:21:26.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:21:26.134: INFO: namespace secrets-1307 deletion completed in 6.118466372s • [SLOW TEST:10.525 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 
13:21:26.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8889 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 13:21:26.213: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 13:21:54.316: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8889 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:21:54.316: INFO: >>> kubeConfig: /root/.kube/config I0516 13:21:54.348427 6 log.go:172] (0xc000993290) (0xc0009dfea0) Create stream I0516 13:21:54.348453 6 log.go:172] (0xc000993290) (0xc0009dfea0) Stream added, broadcasting: 1 I0516 13:21:54.351013 6 log.go:172] (0xc000993290) Reply frame received for 1 I0516 13:21:54.351047 6 log.go:172] (0xc000993290) (0xc0009dff40) Create stream I0516 13:21:54.351057 6 log.go:172] (0xc000993290) (0xc0009dff40) Stream added, broadcasting: 3 I0516 13:21:54.351876 6 log.go:172] (0xc000993290) Reply frame received for 3 I0516 13:21:54.351922 6 log.go:172] (0xc000993290) (0xc001fa2780) Create stream I0516 13:21:54.351934 6 log.go:172] (0xc000993290) (0xc001fa2780) Stream added, broadcasting: 5 I0516 13:21:54.352743 6 log.go:172] (0xc000993290) Reply frame received for 5 I0516 13:21:55.454000 6 log.go:172] (0xc000993290) Data frame received for 5 I0516 13:21:55.454048 6 log.go:172] (0xc001fa2780) (5) Data frame handling I0516 13:21:55.454097 6 log.go:172] (0xc000993290) Data frame received for 3 I0516 
13:21:55.454116 6 log.go:172] (0xc0009dff40) (3) Data frame handling I0516 13:21:55.454127 6 log.go:172] (0xc0009dff40) (3) Data frame sent I0516 13:21:55.454136 6 log.go:172] (0xc000993290) Data frame received for 3 I0516 13:21:55.454150 6 log.go:172] (0xc0009dff40) (3) Data frame handling I0516 13:21:55.460442 6 log.go:172] (0xc000993290) Data frame received for 1 I0516 13:21:55.460487 6 log.go:172] (0xc0009dfea0) (1) Data frame handling I0516 13:21:55.460533 6 log.go:172] (0xc0009dfea0) (1) Data frame sent I0516 13:21:55.460874 6 log.go:172] (0xc000993290) (0xc0009dfea0) Stream removed, broadcasting: 1 I0516 13:21:55.461018 6 log.go:172] (0xc000993290) (0xc0009dfea0) Stream removed, broadcasting: 1 I0516 13:21:55.461043 6 log.go:172] (0xc000993290) (0xc0009dff40) Stream removed, broadcasting: 3 I0516 13:21:55.461845 6 log.go:172] (0xc000993290) (0xc001fa2780) Stream removed, broadcasting: 5 May 16 13:21:55.462: INFO: Found all expected endpoints: [netserver-0] I0516 13:21:55.463021 6 log.go:172] (0xc000993290) Go away received May 16 13:21:55.475: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.243 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8889 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:21:55.475: INFO: >>> kubeConfig: /root/.kube/config I0516 13:21:55.497389 6 log.go:172] (0xc000b3a790) (0xc000f4a780) Create stream I0516 13:21:55.497417 6 log.go:172] (0xc000b3a790) (0xc000f4a780) Stream added, broadcasting: 1 I0516 13:21:55.498859 6 log.go:172] (0xc000b3a790) Reply frame received for 1 I0516 13:21:55.498900 6 log.go:172] (0xc000b3a790) (0xc000f4a820) Create stream I0516 13:21:55.498913 6 log.go:172] (0xc000b3a790) (0xc000f4a820) Stream added, broadcasting: 3 I0516 13:21:55.499820 6 log.go:172] (0xc000b3a790) Reply frame received for 3 I0516 13:21:55.499866 6 log.go:172] (0xc000b3a790) (0xc000f4a960) Create stream I0516 
13:21:55.499878 6 log.go:172] (0xc000b3a790) (0xc000f4a960) Stream added, broadcasting: 5 I0516 13:21:55.500641 6 log.go:172] (0xc000b3a790) Reply frame received for 5 I0516 13:21:56.568412 6 log.go:172] (0xc000b3a790) Data frame received for 3 I0516 13:21:56.568521 6 log.go:172] (0xc000f4a820) (3) Data frame handling I0516 13:21:56.568558 6 log.go:172] (0xc000f4a820) (3) Data frame sent I0516 13:21:56.568584 6 log.go:172] (0xc000b3a790) Data frame received for 3 I0516 13:21:56.568607 6 log.go:172] (0xc000f4a820) (3) Data frame handling I0516 13:21:56.568700 6 log.go:172] (0xc000b3a790) Data frame received for 5 I0516 13:21:56.568716 6 log.go:172] (0xc000f4a960) (5) Data frame handling I0516 13:21:56.571641 6 log.go:172] (0xc000b3a790) Data frame received for 1 I0516 13:21:56.571662 6 log.go:172] (0xc000f4a780) (1) Data frame handling I0516 13:21:56.571678 6 log.go:172] (0xc000f4a780) (1) Data frame sent I0516 13:21:56.571686 6 log.go:172] (0xc000b3a790) (0xc000f4a780) Stream removed, broadcasting: 1 I0516 13:21:56.571762 6 log.go:172] (0xc000b3a790) (0xc000f4a780) Stream removed, broadcasting: 1 I0516 13:21:56.571780 6 log.go:172] (0xc000b3a790) (0xc000f4a820) Stream removed, broadcasting: 3 I0516 13:21:56.571856 6 log.go:172] (0xc000b3a790) Go away received I0516 13:21:56.571991 6 log.go:172] (0xc000b3a790) (0xc000f4a960) Stream removed, broadcasting: 5 May 16 13:21:56.572: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:21:56.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8889" for this suite. 
May 16 13:22:20.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:22:20.665: INFO: namespace pod-network-test-8889 deletion completed in 24.088621877s • [SLOW TEST:54.531 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:22:20.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-6bb32135-df47-40d1-b72c-94782a6fd4ad STEP: Creating secret with name s-test-opt-upd-849d37ad-5c75-40df-8cf2-637d9fd4e707 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6bb32135-df47-40d1-b72c-94782a6fd4ad STEP: Updating secret s-test-opt-upd-849d37ad-5c75-40df-8cf2-637d9fd4e707 STEP: Creating secret with name s-test-opt-create-36adb382-8359-4058-bf3e-fa7f6af415e9 STEP: waiting to observe update in volume 
[AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:22:28.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8391" for this suite. May 16 13:22:50.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:22:51.000: INFO: namespace projected-8391 deletion completed in 22.123042811s • [SLOW TEST:30.334 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:22:51.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 13:22:51.070: INFO: Waiting up to 5m0s for pod "pod-cff7e6eb-6c44-41a1-a436-ae758681fffc" in namespace "emptydir-4313" to be "success or failure" May 16 13:22:51.081: INFO: Pod 
"pod-cff7e6eb-6c44-41a1-a436-ae758681fffc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.52181ms May 16 13:22:53.085: INFO: Pod "pod-cff7e6eb-6c44-41a1-a436-ae758681fffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014880486s May 16 13:22:55.089: INFO: Pod "pod-cff7e6eb-6c44-41a1-a436-ae758681fffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018726726s STEP: Saw pod success May 16 13:22:55.089: INFO: Pod "pod-cff7e6eb-6c44-41a1-a436-ae758681fffc" satisfied condition "success or failure" May 16 13:22:55.092: INFO: Trying to get logs from node iruya-worker2 pod pod-cff7e6eb-6c44-41a1-a436-ae758681fffc container test-container: STEP: delete the pod May 16 13:22:55.112: INFO: Waiting for pod pod-cff7e6eb-6c44-41a1-a436-ae758681fffc to disappear May 16 13:22:55.116: INFO: Pod pod-cff7e6eb-6c44-41a1-a436-ae758681fffc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:22:55.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4313" for this suite. 
May 16 13:23:01.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:23:01.207: INFO: namespace emptydir-4313 deletion completed in 6.088506708s • [SLOW TEST:10.206 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:23:01.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 16 13:23:01.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7033' May 16 13:23:01.531: INFO: stderr: "" May 16 13:23:01.531: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
May 16 13:23:02.534: INFO: Selector matched 1 pods for map[app:redis] May 16 13:23:02.534: INFO: Found 0 / 1 May 16 13:23:03.536: INFO: Selector matched 1 pods for map[app:redis] May 16 13:23:03.536: INFO: Found 0 / 1 May 16 13:23:04.536: INFO: Selector matched 1 pods for map[app:redis] May 16 13:23:04.536: INFO: Found 0 / 1 May 16 13:23:05.535: INFO: Selector matched 1 pods for map[app:redis] May 16 13:23:05.535: INFO: Found 1 / 1 May 16 13:23:05.535: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 13:23:05.539: INFO: Selector matched 1 pods for map[app:redis] May 16 13:23:05.539: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 16 13:23:05.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033' May 16 13:23:05.652: INFO: stderr: "" May 16 13:23:05.652: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 May 13:23:04.383 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 May 13:23:04.383 # Server started, Redis version 3.2.12\n1:M 16 May 13:23:04.383 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 May 13:23:04.383 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 16 13:23:05.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033 --tail=1' May 16 13:23:05.742: INFO: stderr: "" May 16 13:23:05.742: INFO: stdout: "1:M 16 May 13:23:04.383 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 16 13:23:05.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033 --limit-bytes=1' May 16 13:23:05.841: INFO: stderr: "" May 16 13:23:05.841: INFO: stdout: " " STEP: exposing timestamps May 16 13:23:05.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033 --tail=1 --timestamps' May 16 13:23:05.953: INFO: stderr: "" May 16 13:23:05.953: INFO: stdout: "2020-05-16T13:23:04.383621836Z 1:M 16 May 13:23:04.383 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 16 13:23:08.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033 --since=1s' May 16 13:23:08.553: INFO: stderr: "" May 16 13:23:08.553: INFO: stdout: "" May 16 13:23:08.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vddsn redis-master --namespace=kubectl-7033 --since=24h' May 16 13:23:08.649: INFO: stderr: "" May 16 13:23:08.649: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 May 13:23:04.383 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 May 13:23:04.383 # Server started, Redis version 3.2.12\n1:M 16 May 13:23:04.383 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 May 13:23:04.383 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 16 13:23:08.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7033' May 16 13:23:08.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 16 13:23:08.756: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 16 13:23:08.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7033' May 16 13:23:08.868: INFO: stderr: "No resources found.\n" May 16 13:23:08.868: INFO: stdout: "" May 16 13:23:08.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7033 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 13:23:08.956: INFO: stderr: "" May 16 13:23:08.957: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:23:08.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7033" for this suite. 
May 16 13:23:31.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:23:31.088: INFO: namespace kubectl-7033 deletion completed in 22.090199039s • [SLOW TEST:29.881 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:23:31.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 16 13:23:31.173: INFO: Waiting up to 5m0s for pod "pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6" in namespace "emptydir-1580" to be "success or failure" May 16 13:23:31.182: INFO: Pod "pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.007107ms May 16 13:23:33.186: INFO: Pod "pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012607666s May 16 13:23:35.191: INFO: Pod "pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017263618s STEP: Saw pod success May 16 13:23:35.191: INFO: Pod "pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6" satisfied condition "success or failure" May 16 13:23:35.194: INFO: Trying to get logs from node iruya-worker2 pod pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6 container test-container: STEP: delete the pod May 16 13:23:35.318: INFO: Waiting for pod pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6 to disappear May 16 13:23:35.332: INFO: Pod pod-11949c38-6f8c-4879-9dcd-96628bc1f5a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:23:35.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1580" for this suite. 
May 16 13:23:41.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:23:41.501: INFO: namespace emptydir-1580 deletion completed in 6.16427737s • [SLOW TEST:10.412 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:23:41.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-hxv4 STEP: Creating a pod to test atomic-volume-subpath May 16 13:23:41.580: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hxv4" in namespace "subpath-2499" to be "success or failure" May 16 13:23:41.584: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.408342ms May 16 13:23:43.590: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010006227s May 16 13:23:45.595: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 4.01413549s May 16 13:23:47.603: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 6.022142829s May 16 13:23:49.607: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 8.026321042s May 16 13:23:51.611: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 10.030602083s May 16 13:23:53.615: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 12.034378433s May 16 13:23:55.619: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 14.038479605s May 16 13:23:57.623: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 16.042170592s May 16 13:23:59.626: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 18.045426019s May 16 13:24:01.630: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 20.049845938s May 16 13:24:03.634: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Running", Reason="", readiness=true. Elapsed: 22.053918846s May 16 13:24:05.639: INFO: Pod "pod-subpath-test-projected-hxv4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058364099s STEP: Saw pod success May 16 13:24:05.639: INFO: Pod "pod-subpath-test-projected-hxv4" satisfied condition "success or failure" May 16 13:24:05.642: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-hxv4 container test-container-subpath-projected-hxv4: STEP: delete the pod May 16 13:24:05.665: INFO: Waiting for pod pod-subpath-test-projected-hxv4 to disappear May 16 13:24:05.670: INFO: Pod pod-subpath-test-projected-hxv4 no longer exists STEP: Deleting pod pod-subpath-test-projected-hxv4 May 16 13:24:05.670: INFO: Deleting pod "pod-subpath-test-projected-hxv4" in namespace "subpath-2499" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:24:05.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2499" for this suite. May 16 13:24:11.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:24:11.786: INFO: namespace subpath-2499 deletion completed in 6.110458963s • [SLOW TEST:30.285 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:24:11.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 13:24:11.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7895' May 16 13:24:11.951: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 13:24:11.951: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 16 13:24:13.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7895' May 16 13:24:14.155: INFO: stderr: "" May 16 13:24:14.155: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:24:14.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7895" for this suite. May 16 13:26:14.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:26:14.317: INFO: namespace kubectl-7895 deletion completed in 2m0.12669213s • [SLOW TEST:122.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client May 16 13:26:14.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:26:19.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6067" for this suite. May 16 13:26:26.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:26:26.103: INFO: namespace watch-6067 deletion completed in 6.180009966s • [SLOW TEST:11.785 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:26:26.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:26:26.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d" in namespace "downward-api-6794" to be "success or failure" May 16 13:26:26.203: INFO: Pod "downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.383085ms May 16 13:26:28.427: INFO: Pod "downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240307706s May 16 13:26:30.430: INFO: Pod "downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.243904109s STEP: Saw pod success May 16 13:26:30.430: INFO: Pod "downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d" satisfied condition "success or failure" May 16 13:26:30.432: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d container client-container: STEP: delete the pod May 16 13:26:30.463: INFO: Waiting for pod downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d to disappear May 16 13:26:30.491: INFO: Pod downwardapi-volume-93b936ae-8ee1-4ec3-842b-9342ad0d0c9d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:26:30.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6794" for this suite. 
May 16 13:26:36.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:26:36.592: INFO: namespace downward-api-6794 deletion completed in 6.097411711s • [SLOW TEST:10.489 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:26:36.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 16 13:26:36.680: INFO: Waiting up to 5m0s for pod "pod-67177343-bbf3-43ad-9092-f83994309566" in namespace "emptydir-3772" to be "success or failure" May 16 13:26:36.704: INFO: Pod "pod-67177343-bbf3-43ad-9092-f83994309566": Phase="Pending", Reason="", readiness=false. Elapsed: 24.40294ms May 16 13:26:38.708: INFO: Pod "pod-67177343-bbf3-43ad-9092-f83994309566": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028186319s May 16 13:26:40.712: INFO: Pod "pod-67177343-bbf3-43ad-9092-f83994309566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032130904s STEP: Saw pod success May 16 13:26:40.712: INFO: Pod "pod-67177343-bbf3-43ad-9092-f83994309566" satisfied condition "success or failure" May 16 13:26:40.715: INFO: Trying to get logs from node iruya-worker2 pod pod-67177343-bbf3-43ad-9092-f83994309566 container test-container: STEP: delete the pod May 16 13:26:40.747: INFO: Waiting for pod pod-67177343-bbf3-43ad-9092-f83994309566 to disappear May 16 13:26:40.750: INFO: Pod pod-67177343-bbf3-43ad-9092-f83994309566 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:26:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3772" for this suite. May 16 13:26:46.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:26:46.911: INFO: namespace emptydir-3772 deletion completed in 6.156979456s • [SLOW TEST:10.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:26:46.911: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5935 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-5935 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5935 May 16 13:26:47.000: INFO: Found 0 stateful pods, waiting for 1 May 16 13:26:57.004: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 16 13:26:57.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 13:27:00.061: INFO: stderr: "I0516 13:26:59.911958 640 log.go:172] (0xc000146e70) (0xc000800820) Create stream\nI0516 13:26:59.911995 640 log.go:172] (0xc000146e70) (0xc000800820) Stream added, broadcasting: 1\nI0516 13:26:59.913951 640 log.go:172] (0xc000146e70) Reply frame received for 1\nI0516 13:26:59.913994 640 log.go:172] (0xc000146e70) (0xc0008008c0) Create stream\nI0516 13:26:59.914010 640 log.go:172] (0xc000146e70) (0xc0008008c0) Stream added, broadcasting: 3\nI0516 13:26:59.914790 640 log.go:172] (0xc000146e70) Reply frame received for 3\nI0516 13:26:59.914841 640 log.go:172] (0xc000146e70) (0xc00045e000) Create stream\nI0516 13:26:59.914855 640 
log.go:172] (0xc000146e70) (0xc00045e000) Stream added, broadcasting: 5\nI0516 13:26:59.915500 640 log.go:172] (0xc000146e70) Reply frame received for 5\nI0516 13:27:00.012584 640 log.go:172] (0xc000146e70) Data frame received for 5\nI0516 13:27:00.012605 640 log.go:172] (0xc00045e000) (5) Data frame handling\nI0516 13:27:00.012616 640 log.go:172] (0xc00045e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 13:27:00.054617 640 log.go:172] (0xc000146e70) Data frame received for 3\nI0516 13:27:00.054678 640 log.go:172] (0xc0008008c0) (3) Data frame handling\nI0516 13:27:00.054700 640 log.go:172] (0xc0008008c0) (3) Data frame sent\nI0516 13:27:00.054711 640 log.go:172] (0xc000146e70) Data frame received for 3\nI0516 13:27:00.054719 640 log.go:172] (0xc0008008c0) (3) Data frame handling\nI0516 13:27:00.054789 640 log.go:172] (0xc000146e70) Data frame received for 5\nI0516 13:27:00.054829 640 log.go:172] (0xc00045e000) (5) Data frame handling\nI0516 13:27:00.057002 640 log.go:172] (0xc000146e70) Data frame received for 1\nI0516 13:27:00.057021 640 log.go:172] (0xc000800820) (1) Data frame handling\nI0516 13:27:00.057029 640 log.go:172] (0xc000800820) (1) Data frame sent\nI0516 13:27:00.057241 640 log.go:172] (0xc000146e70) (0xc000800820) Stream removed, broadcasting: 1\nI0516 13:27:00.057377 640 log.go:172] (0xc000146e70) Go away received\nI0516 13:27:00.057541 640 log.go:172] (0xc000146e70) (0xc000800820) Stream removed, broadcasting: 1\nI0516 13:27:00.057564 640 log.go:172] (0xc000146e70) (0xc0008008c0) Stream removed, broadcasting: 3\nI0516 13:27:00.057573 640 log.go:172] (0xc000146e70) (0xc00045e000) Stream removed, broadcasting: 5\n" May 16 13:27:00.061: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 13:27:00.061: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 13:27:00.066: INFO: Waiting for pod ss-0 to enter 
Running - Ready=false, currently Running - Ready=true May 16 13:27:10.076: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 13:27:10.076: INFO: Waiting for statefulset status.replicas updated to 0 May 16 13:27:10.093: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:10.093: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:10.093: INFO: May 16 13:27:10.093: INFO: StatefulSet ss has not reached scale 3, at 1 May 16 13:27:11.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990258706s May 16 13:27:12.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985605256s May 16 13:27:13.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981104159s May 16 13:27:14.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976079579s May 16 13:27:15.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.949920332s May 16 13:27:16.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.944918019s May 16 13:27:17.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939401017s May 16 13:27:18.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.935003633s May 16 13:27:19.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.034635ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5935 May 16 13:27:20.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 13:27:20.377: INFO: stderr: "I0516 13:27:20.300581 672 log.go:172] (0xc00012ae70) (0xc00083c6e0) Create stream\nI0516 13:27:20.300648 672 log.go:172] (0xc00012ae70) (0xc00083c6e0) Stream added, broadcasting: 1\nI0516 13:27:20.303514 672 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0516 13:27:20.303562 672 log.go:172] (0xc00012ae70) (0xc00083c780) Create stream\nI0516 13:27:20.303576 672 log.go:172] (0xc00012ae70) (0xc00083c780) Stream added, broadcasting: 3\nI0516 13:27:20.304694 672 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0516 13:27:20.304737 672 log.go:172] (0xc00012ae70) (0xc0005aa280) Create stream\nI0516 13:27:20.304751 672 log.go:172] (0xc00012ae70) (0xc0005aa280) Stream added, broadcasting: 5\nI0516 13:27:20.305940 672 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0516 13:27:20.370943 672 log.go:172] (0xc00012ae70) Data frame received for 5\nI0516 13:27:20.370976 672 log.go:172] (0xc0005aa280) (5) Data frame handling\nI0516 13:27:20.370995 672 log.go:172] (0xc0005aa280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 13:27:20.371035 672 log.go:172] (0xc00012ae70) Data frame received for 5\nI0516 13:27:20.371045 672 log.go:172] (0xc0005aa280) (5) Data frame handling\nI0516 13:27:20.371067 672 log.go:172] (0xc00012ae70) Data frame received for 3\nI0516 13:27:20.371080 672 log.go:172] (0xc00083c780) (3) Data frame handling\nI0516 13:27:20.371104 672 log.go:172] (0xc00083c780) (3) Data frame sent\nI0516 13:27:20.371113 672 log.go:172] (0xc00012ae70) Data frame received for 3\nI0516 13:27:20.371121 672 log.go:172] (0xc00083c780) (3) Data frame handling\nI0516 13:27:20.372311 672 log.go:172] (0xc00012ae70) Data frame received for 1\nI0516 13:27:20.372355 672 log.go:172] (0xc00083c6e0) (1) Data frame handling\nI0516 13:27:20.372375 672 log.go:172] (0xc00083c6e0) (1) Data frame sent\nI0516 
13:27:20.372396 672 log.go:172] (0xc00012ae70) (0xc00083c6e0) Stream removed, broadcasting: 1\nI0516 13:27:20.372424 672 log.go:172] (0xc00012ae70) Go away received\nI0516 13:27:20.372842 672 log.go:172] (0xc00012ae70) (0xc00083c6e0) Stream removed, broadcasting: 1\nI0516 13:27:20.372866 672 log.go:172] (0xc00012ae70) (0xc00083c780) Stream removed, broadcasting: 3\nI0516 13:27:20.372875 672 log.go:172] (0xc00012ae70) (0xc0005aa280) Stream removed, broadcasting: 5\n" May 16 13:27:20.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 13:27:20.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 13:27:20.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 13:27:20.589: INFO: stderr: "I0516 13:27:20.508480 692 log.go:172] (0xc000958370) (0xc00088c640) Create stream\nI0516 13:27:20.508550 692 log.go:172] (0xc000958370) (0xc00088c640) Stream added, broadcasting: 1\nI0516 13:27:20.510696 692 log.go:172] (0xc000958370) Reply frame received for 1\nI0516 13:27:20.510751 692 log.go:172] (0xc000958370) (0xc000926000) Create stream\nI0516 13:27:20.510772 692 log.go:172] (0xc000958370) (0xc000926000) Stream added, broadcasting: 3\nI0516 13:27:20.511860 692 log.go:172] (0xc000958370) Reply frame received for 3\nI0516 13:27:20.511917 692 log.go:172] (0xc000958370) (0xc00097e000) Create stream\nI0516 13:27:20.511952 692 log.go:172] (0xc000958370) (0xc00097e000) Stream added, broadcasting: 5\nI0516 13:27:20.512962 692 log.go:172] (0xc000958370) Reply frame received for 5\nI0516 13:27:20.582204 692 log.go:172] (0xc000958370) Data frame received for 5\nI0516 13:27:20.582246 692 log.go:172] (0xc00097e000) (5) Data frame handling\nI0516 13:27:20.582264 692 log.go:172] (0xc00097e000) (5) Data frame sent\nI0516 
13:27:20.582280 692 log.go:172] (0xc000958370) Data frame received for 5\nI0516 13:27:20.582293 692 log.go:172] (0xc00097e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 13:27:20.582313 692 log.go:172] (0xc000958370) Data frame received for 1\nI0516 13:27:20.582427 692 log.go:172] (0xc00088c640) (1) Data frame handling\nI0516 13:27:20.582455 692 log.go:172] (0xc00088c640) (1) Data frame sent\nI0516 13:27:20.582472 692 log.go:172] (0xc000958370) (0xc00088c640) Stream removed, broadcasting: 1\nI0516 13:27:20.582493 692 log.go:172] (0xc000958370) Data frame received for 3\nI0516 13:27:20.582521 692 log.go:172] (0xc000926000) (3) Data frame handling\nI0516 13:27:20.582544 692 log.go:172] (0xc000926000) (3) Data frame sent\nI0516 13:27:20.582557 692 log.go:172] (0xc000958370) Data frame received for 3\nI0516 13:27:20.582579 692 log.go:172] (0xc000926000) (3) Data frame handling\nI0516 13:27:20.582597 692 log.go:172] (0xc000958370) Go away received\nI0516 13:27:20.582845 692 log.go:172] (0xc000958370) (0xc00088c640) Stream removed, broadcasting: 1\nI0516 13:27:20.582864 692 log.go:172] (0xc000958370) (0xc000926000) Stream removed, broadcasting: 3\nI0516 13:27:20.582876 692 log.go:172] (0xc000958370) (0xc00097e000) Stream removed, broadcasting: 5\n" May 16 13:27:20.589: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 13:27:20.589: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 13:27:20.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 13:27:20.791: INFO: stderr: "I0516 13:27:20.714240 713 log.go:172] (0xc00043c9a0) (0xc0005ca820) Create stream\nI0516 13:27:20.714292 713 log.go:172] (0xc00043c9a0) 
(0xc0005ca820) Stream added, broadcasting: 1\nI0516 13:27:20.716285 713 log.go:172] (0xc00043c9a0) Reply frame received for 1\nI0516 13:27:20.716353 713 log.go:172] (0xc00043c9a0) (0xc000898000) Create stream\nI0516 13:27:20.716370 713 log.go:172] (0xc00043c9a0) (0xc000898000) Stream added, broadcasting: 3\nI0516 13:27:20.717622 713 log.go:172] (0xc00043c9a0) Reply frame received for 3\nI0516 13:27:20.717650 713 log.go:172] (0xc00043c9a0) (0xc0005ca8c0) Create stream\nI0516 13:27:20.717657 713 log.go:172] (0xc00043c9a0) (0xc0005ca8c0) Stream added, broadcasting: 5\nI0516 13:27:20.718486 713 log.go:172] (0xc00043c9a0) Reply frame received for 5\nI0516 13:27:20.784693 713 log.go:172] (0xc00043c9a0) Data frame received for 3\nI0516 13:27:20.784724 713 log.go:172] (0xc000898000) (3) Data frame handling\nI0516 13:27:20.784746 713 log.go:172] (0xc00043c9a0) Data frame received for 5\nI0516 13:27:20.784773 713 log.go:172] (0xc0005ca8c0) (5) Data frame handling\nI0516 13:27:20.784786 713 log.go:172] (0xc0005ca8c0) (5) Data frame sent\nI0516 13:27:20.784803 713 log.go:172] (0xc00043c9a0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 13:27:20.784827 713 log.go:172] (0xc0005ca8c0) (5) Data frame handling\nI0516 13:27:20.784905 713 log.go:172] (0xc000898000) (3) Data frame sent\nI0516 13:27:20.784946 713 log.go:172] (0xc00043c9a0) Data frame received for 3\nI0516 13:27:20.784963 713 log.go:172] (0xc000898000) (3) Data frame handling\nI0516 13:27:20.787537 713 log.go:172] (0xc00043c9a0) Data frame received for 1\nI0516 13:27:20.787555 713 log.go:172] (0xc0005ca820) (1) Data frame handling\nI0516 13:27:20.787574 713 log.go:172] (0xc0005ca820) (1) Data frame sent\nI0516 13:27:20.787586 713 log.go:172] (0xc00043c9a0) (0xc0005ca820) Stream removed, broadcasting: 1\nI0516 13:27:20.787648 713 log.go:172] (0xc00043c9a0) Go away received\nI0516 13:27:20.787888 713 log.go:172] 
(0xc00043c9a0) (0xc0005ca820) Stream removed, broadcasting: 1\nI0516 13:27:20.787903 713 log.go:172] (0xc00043c9a0) (0xc000898000) Stream removed, broadcasting: 3\nI0516 13:27:20.787911 713 log.go:172] (0xc00043c9a0) (0xc0005ca8c0) Stream removed, broadcasting: 5\n" May 16 13:27:20.792: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 13:27:20.792: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 13:27:20.796: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 16 13:27:30.802: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 13:27:30.802: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 13:27:30.802: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 16 13:27:30.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 13:27:31.027: INFO: stderr: "I0516 13:27:30.942198 732 log.go:172] (0xc0009fa630) (0xc000640aa0) Create stream\nI0516 13:27:30.942268 732 log.go:172] (0xc0009fa630) (0xc000640aa0) Stream added, broadcasting: 1\nI0516 13:27:30.946090 732 log.go:172] (0xc0009fa630) Reply frame received for 1\nI0516 13:27:30.946170 732 log.go:172] (0xc0009fa630) (0xc000ac8000) Create stream\nI0516 13:27:30.946197 732 log.go:172] (0xc0009fa630) (0xc000ac8000) Stream added, broadcasting: 3\nI0516 13:27:30.947705 732 log.go:172] (0xc0009fa630) Reply frame received for 3\nI0516 13:27:30.947744 732 log.go:172] (0xc0009fa630) (0xc000ac80a0) Create stream\nI0516 13:27:30.947760 732 log.go:172] (0xc0009fa630) (0xc000ac80a0) Stream added, broadcasting: 5\nI0516 13:27:30.948823 732 log.go:172] 
(0xc0009fa630) Reply frame received for 5\nI0516 13:27:31.019748 732 log.go:172] (0xc0009fa630) Data frame received for 5\nI0516 13:27:31.019786 732 log.go:172] (0xc000ac80a0) (5) Data frame handling\nI0516 13:27:31.019818 732 log.go:172] (0xc000ac80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 13:27:31.019841 732 log.go:172] (0xc0009fa630) Data frame received for 3\nI0516 13:27:31.019865 732 log.go:172] (0xc000ac8000) (3) Data frame handling\nI0516 13:27:31.019884 732 log.go:172] (0xc000ac8000) (3) Data frame sent\nI0516 13:27:31.019894 732 log.go:172] (0xc0009fa630) Data frame received for 3\nI0516 13:27:31.019902 732 log.go:172] (0xc000ac8000) (3) Data frame handling\nI0516 13:27:31.019928 732 log.go:172] (0xc0009fa630) Data frame received for 5\nI0516 13:27:31.019995 732 log.go:172] (0xc000ac80a0) (5) Data frame handling\nI0516 13:27:31.021876 732 log.go:172] (0xc0009fa630) Data frame received for 1\nI0516 13:27:31.021909 732 log.go:172] (0xc000640aa0) (1) Data frame handling\nI0516 13:27:31.021937 732 log.go:172] (0xc000640aa0) (1) Data frame sent\nI0516 13:27:31.021961 732 log.go:172] (0xc0009fa630) (0xc000640aa0) Stream removed, broadcasting: 1\nI0516 13:27:31.021993 732 log.go:172] (0xc0009fa630) Go away received\nI0516 13:27:31.022495 732 log.go:172] (0xc0009fa630) (0xc000640aa0) Stream removed, broadcasting: 1\nI0516 13:27:31.022520 732 log.go:172] (0xc0009fa630) (0xc000ac8000) Stream removed, broadcasting: 3\nI0516 13:27:31.022532 732 log.go:172] (0xc0009fa630) (0xc000ac80a0) Stream removed, broadcasting: 5\n" May 16 13:27:31.027: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 13:27:31.027: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 13:27:31.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-1 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' May 16 13:27:31.263: INFO: stderr: "I0516 13:27:31.161518 753 log.go:172] (0xc000a74630) (0xc0009a2820) Create stream\nI0516 13:27:31.161568 753 log.go:172] (0xc000a74630) (0xc0009a2820) Stream added, broadcasting: 1\nI0516 13:27:31.164266 753 log.go:172] (0xc000a74630) Reply frame received for 1\nI0516 13:27:31.164309 753 log.go:172] (0xc000a74630) (0xc0002e8000) Create stream\nI0516 13:27:31.164326 753 log.go:172] (0xc000a74630) (0xc0002e8000) Stream added, broadcasting: 3\nI0516 13:27:31.165776 753 log.go:172] (0xc000a74630) Reply frame received for 3\nI0516 13:27:31.165813 753 log.go:172] (0xc000a74630) (0xc0002e80a0) Create stream\nI0516 13:27:31.165822 753 log.go:172] (0xc000a74630) (0xc0002e80a0) Stream added, broadcasting: 5\nI0516 13:27:31.166708 753 log.go:172] (0xc000a74630) Reply frame received for 5\nI0516 13:27:31.217063 753 log.go:172] (0xc000a74630) Data frame received for 5\nI0516 13:27:31.217082 753 log.go:172] (0xc0002e80a0) (5) Data frame handling\nI0516 13:27:31.217091 753 log.go:172] (0xc0002e80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 13:27:31.255820 753 log.go:172] (0xc000a74630) Data frame received for 5\nI0516 13:27:31.255849 753 log.go:172] (0xc0002e80a0) (5) Data frame handling\nI0516 13:27:31.255865 753 log.go:172] (0xc000a74630) Data frame received for 3\nI0516 13:27:31.255870 753 log.go:172] (0xc0002e8000) (3) Data frame handling\nI0516 13:27:31.255877 753 log.go:172] (0xc0002e8000) (3) Data frame sent\nI0516 13:27:31.255883 753 log.go:172] (0xc000a74630) Data frame received for 3\nI0516 13:27:31.255887 753 log.go:172] (0xc0002e8000) (3) Data frame handling\nI0516 13:27:31.257725 753 log.go:172] (0xc000a74630) Data frame received for 1\nI0516 13:27:31.257745 753 log.go:172] (0xc0009a2820) (1) Data frame handling\nI0516 13:27:31.257757 753 log.go:172] (0xc0009a2820) (1) Data frame sent\nI0516 13:27:31.257767 753 log.go:172] (0xc000a74630) 
(0xc0009a2820) Stream removed, broadcasting: 1\nI0516 13:27:31.257781 753 log.go:172] (0xc000a74630) Go away received\nI0516 13:27:31.258143 753 log.go:172] (0xc000a74630) (0xc0009a2820) Stream removed, broadcasting: 1\nI0516 13:27:31.258164 753 log.go:172] (0xc000a74630) (0xc0002e8000) Stream removed, broadcasting: 3\nI0516 13:27:31.258174 753 log.go:172] (0xc000a74630) (0xc0002e80a0) Stream removed, broadcasting: 5\n" May 16 13:27:31.264: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 13:27:31.264: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 13:27:31.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 13:27:31.515: INFO: stderr: "I0516 13:27:31.408678 774 log.go:172] (0xc0004fe370) (0xc00030a820) Create stream\nI0516 13:27:31.408730 774 log.go:172] (0xc0004fe370) (0xc00030a820) Stream added, broadcasting: 1\nI0516 13:27:31.410577 774 log.go:172] (0xc0004fe370) Reply frame received for 1\nI0516 13:27:31.410607 774 log.go:172] (0xc0004fe370) (0xc000992000) Create stream\nI0516 13:27:31.410617 774 log.go:172] (0xc0004fe370) (0xc000992000) Stream added, broadcasting: 3\nI0516 13:27:31.411583 774 log.go:172] (0xc0004fe370) Reply frame received for 3\nI0516 13:27:31.411620 774 log.go:172] (0xc0004fe370) (0xc00030a8c0) Create stream\nI0516 13:27:31.411633 774 log.go:172] (0xc0004fe370) (0xc00030a8c0) Stream added, broadcasting: 5\nI0516 13:27:31.412554 774 log.go:172] (0xc0004fe370) Reply frame received for 5\nI0516 13:27:31.475198 774 log.go:172] (0xc0004fe370) Data frame received for 5\nI0516 13:27:31.475222 774 log.go:172] (0xc00030a8c0) (5) Data frame handling\nI0516 13:27:31.475239 774 log.go:172] (0xc00030a8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 
13:27:31.507487 774 log.go:172] (0xc0004fe370) Data frame received for 3\nI0516 13:27:31.507537 774 log.go:172] (0xc000992000) (3) Data frame handling\nI0516 13:27:31.507573 774 log.go:172] (0xc000992000) (3) Data frame sent\nI0516 13:27:31.507676 774 log.go:172] (0xc0004fe370) Data frame received for 5\nI0516 13:27:31.507709 774 log.go:172] (0xc00030a8c0) (5) Data frame handling\nI0516 13:27:31.508066 774 log.go:172] (0xc0004fe370) Data frame received for 3\nI0516 13:27:31.508102 774 log.go:172] (0xc000992000) (3) Data frame handling\nI0516 13:27:31.510055 774 log.go:172] (0xc0004fe370) Data frame received for 1\nI0516 13:27:31.510102 774 log.go:172] (0xc00030a820) (1) Data frame handling\nI0516 13:27:31.510114 774 log.go:172] (0xc00030a820) (1) Data frame sent\nI0516 13:27:31.510171 774 log.go:172] (0xc0004fe370) (0xc00030a820) Stream removed, broadcasting: 1\nI0516 13:27:31.510230 774 log.go:172] (0xc0004fe370) Go away received\nI0516 13:27:31.510679 774 log.go:172] (0xc0004fe370) (0xc00030a820) Stream removed, broadcasting: 1\nI0516 13:27:31.510704 774 log.go:172] (0xc0004fe370) (0xc000992000) Stream removed, broadcasting: 3\nI0516 13:27:31.510715 774 log.go:172] (0xc0004fe370) (0xc00030a8c0) Stream removed, broadcasting: 5\n" May 16 13:27:31.515: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 13:27:31.515: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 13:27:31.515: INFO: Waiting for statefulset status.replicas updated to 0 May 16 13:27:31.518: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 16 13:27:41.526: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 13:27:41.526: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 13:27:41.526: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently 
Running - Ready=false May 16 13:27:41.537: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:41.537: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:41.537: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:41.537: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:41.537: INFO: May 16 13:27:41.537: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 13:27:42.541: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:42.541: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:42.541: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:42.541: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:42.541: INFO: May 16 13:27:42.541: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 13:27:43.545: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:43.545: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:43.545: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:43.545: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:43.545: INFO: May 16 13:27:43.545: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 13:27:44.550: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:44.550: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:44.550: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 
13:27:10 +0000 UTC }] May 16 13:27:44.550: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:44.550: INFO: May 16 13:27:44.550: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 13:27:45.555: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:45.556: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:45.556: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:45.556: INFO: May 16 13:27:45.556: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 13:27:46.561: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:46.561: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:46.561: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:46.561: INFO: May 16 13:27:46.561: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 13:27:47.564: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:47.564: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:47.564: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 
13:27:47.564: INFO: May 16 13:27:47.564: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 13:27:48.570: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:48.570: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:48.570: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:48.570: INFO: May 16 13:27:48.570: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 13:27:49.575: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:49.575: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:49.575: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 
13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:49.575: INFO: May 16 13:27:49.575: INFO: StatefulSet ss has not reached scale 0, at 2 May 16 13:27:50.585: INFO: POD NODE PHASE GRACE CONDITIONS May 16 13:27:50.585: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:26:47 +0000 UTC }] May 16 13:27:50.585: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:27:10 +0000 UTC }] May 16 13:27:50.585: INFO: May 16 13:27:50.585: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5935 May 16 13:27:51.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 13:27:51.719: INFO: rc: 1 May 16 13:27:51.719: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002bbd200 exit status 1 true [0xc000729878 0xc0007298a0 0xc0007298d8] [0xc000729878 0xc0007298a0 0xc0007298d8] [0xc000729890 0xc0007298c0] [0xba70e0 0xba70e0] 0xc001d8eea0 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
May 16 13:28:01.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 16 13:28:01.821: INFO: rc: 1
May 16 13:28:01.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002bbd2f0 exit status 1 true [0xc0007298f0 0xc000729928 0xc000729990] [0xc0007298f0 0xc000729928 0xc000729990] [0xc000729918 0xc000729968] [0xba70e0 0xba70e0] 0xc001d8f1a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[identical retry attempts every 10s from 13:28:11 through 13:32:44 elided; each run returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found']
May 16 13:32:54.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5935 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 16 13:32:54.788: INFO: rc: 1
May 16 13:32:54.788: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 16 13:32:54.788: INFO: Scaling statefulset ss to 0
May 16 13:32:54.796: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 16 13:32:54.798: INFO: Deleting all statefulset in ns statefulset-5935
May 16 13:32:54.800: INFO: Scaling statefulset ss to 0
May 16 13:32:54.807: INFO: Waiting for statefulset status.replicas updated to 0
May 16 13:32:54.809: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:32:54.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5935" for this suite.
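Editor's note on the retried command above: it wraps `mv` in `|| true`, so the shell that `kubectl exec` runs inside the container exits 0 even when `mv` itself fails. The repeated `rc: 1` results therefore come from `kubectl exec` failing to reach the pod at all (the container, and later the pod object, no longer exist during the burst scale-down), not from `mv`. A minimal local sketch of that masking behaviour, using deliberately nonexistent paths:

```shell
# `|| true` absorbs the failure of the command before it, so the
# inner shell always reports success -- exactly like the command the
# e2e harness retries. (Both paths below are intentionally missing.)
sh -x -c 'mv -v /tmp/no-such-file /no-such-dir/ || true'
echo "inner shell rc: $?"   # prints "inner shell rc: 0"
```

Because the inner shell can never fail, a non-zero `rc` from the harness can only mean the `kubectl exec` transport itself failed, which is why the loop keeps polling until the pod reappears or the test times out.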
May 16 13:33:00.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:33:00.912: INFO: namespace statefulset-5935 deletion completed in 6.088728908s
• [SLOW TEST:374.001 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:33:00.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
May 16 13:33:04.999: INFO: Pod pod-hostip-097e93b5-8832-44f7-970f-bcad083376ac has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:33:04.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-167" for this suite.
May 16 13:33:27.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:33:27.113: INFO: namespace pods-167 deletion completed in 22.111040241s
• [SLOW TEST:26.201 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:33:27.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-43de6a25-aaa9-4558-b025-404b5d63ca69
STEP: Creating a pod to test consume secrets
May 16 13:33:27.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6" in namespace "projected-6458" to be "success or failure"
May 16 13:33:27.208: INFO: Pod "pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192826ms
May 16 13:33:29.214: INFO: Pod "pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014175359s
May 16 13:33:31.218: INFO: Pod "pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017708415s
STEP: Saw pod success
May 16 13:33:31.218: INFO: Pod "pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6" satisfied condition "success or failure"
May 16 13:33:31.220: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6 container projected-secret-volume-test:
STEP: delete the pod
May 16 13:33:31.250: INFO: Waiting for pod pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6 to disappear
May 16 13:33:31.276: INFO: Pod pod-projected-secrets-9a3186bf-c936-44a9-9c7b-05c7f9870cc6 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:33:31.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6458" for this suite.
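Editor's note: the 5m0s "success or failure" waits and the 10s RunHostCmd retries in this log both follow the same poll-until-deadline pattern: run a check, and on failure sleep a fixed interval and try again until an attempt budget is exhausted. A minimal sketch of that pattern in plain shell; the function names, the zero-second demo delay, and the `flaky` stand-in command are illustrative, not the harness's actual code:

```shell
#!/bin/sh
# Sketch of a RunHostCmd-style retry loop: run a command, and on a
# non-zero exit wait `delay` seconds and retry, up to `attempts` tries.
retry_host_cmd() {
    attempts=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        rc=$?
        echo "rc: $rc -- waiting ${delay}s to retry ($i/$attempts)" >&2
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}

# Demo without a live cluster: a command that fails twice, then succeeds.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
    n=$(($(cat "$count_file") + 1))
    echo "$n" > "$count_file"
    [ "$n" -ge 3 ]
}
retry_host_cmd 5 0 flaky && echo "command eventually succeeded"
rm -f "$count_file"
```

The real harness layers a wall-clock deadline on top of this (hence the Elapsed values in the log), but the retry skeleton is the same.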
May 16 13:33:37.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:33:37.379: INFO: namespace projected-6458 deletion completed in 6.100077998s
• [SLOW TEST:10.265 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:33:37.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:34:37.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9203" for this suite.
May 16 13:35:05.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:35:05.600: INFO: namespace container-probe-9203 deletion completed in 28.140046971s
• [SLOW TEST:88.221 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:35:05.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 16 13:35:09.713: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 16 13:35:24.794: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:35:24.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9664" for this suite.
May 16 13:35:30.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:35:30.926: INFO: namespace pods-9664 deletion completed in 6.125026397s
• [SLOW TEST:25.326 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:35:30.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0516 13:36:11.460761 6 metrics_grabber.go:79] Master node is not
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 13:36:11.460: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:36:11.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2099" for this suite. 
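The garbage-collector test above deletes the RC with delete options that request orphaning, then waits 30 seconds to confirm the pods were not collected. The contract being verified — dependents survive and merely lose their owner reference — can be sketched with plain Go structs (illustrative only, not client-go):

```go
package main

import "fmt"

// pod is a toy stand-in for a Pod with at most one owner reference.
type pod struct {
	Name     string
	OwnerUID string // UID of the owning ReplicationController; "" once orphaned
}

// orphanDelete models deleting a controller with propagation policy "Orphan":
// every dependent keeps running, but its owner reference is stripped, so the
// garbage collector will never clean it up on the deleted owner's behalf.
func orphanDelete(pods []pod, rcUID string) []pod {
	out := make([]pod, 0, len(pods))
	for _, p := range pods {
		if p.OwnerUID == rcUID {
			p.OwnerUID = ""
		}
		out = append(out, p) // no pod is removed
	}
	return out
}

func main() {
	pods := []pod{{"pod-a", "rc-1"}, {"pod-b", "rc-1"}}
	pods = orphanDelete(pods, "rc-1")
	fmt.Println(len(pods), pods[0].OwnerUID == "", pods[1].OwnerUID == "")
}
```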
May 16 13:36:21.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:36:21.562: INFO: namespace gc-2099 deletion completed in 10.098353461s • [SLOW TEST:50.634 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:36:21.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:36:21.628: INFO: Creating deployment "test-recreate-deployment" May 16 13:36:21.643: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 16 13:36:21.655: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 16 13:36:23.662: INFO: Waiting deployment "test-recreate-deployment" to complete May 16 13:36:23.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725232981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725232981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725232981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725232981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 13:36:25.668: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 16 13:36:25.676: INFO: Updating deployment test-recreate-deployment May 16 13:36:25.676: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 16 13:36:26.356: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4690,SelfLink:/apis/apps/v1/namespaces/deployment-4690/deployments/test-recreate-deployment,UID:e452a0c9-be2e-4bf4-bf31-113c6a5de2a1,ResourceVersion:11219117,Generation:2,CreationTimestamp:2020-05-16 13:36:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-16 13:36:25 +0000 UTC 2020-05-16 13:36:25 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-16 13:36:26 +0000 UTC 2020-05-16 13:36:21 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 16 13:36:26.422: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4690,SelfLink:/apis/apps/v1/namespaces/deployment-4690/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8893f0e5-4d17-4f37-a63c-51198aed0867,ResourceVersion:11219113,Generation:1,CreationTimestamp:2020-05-16 13:36:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e452a0c9-be2e-4bf4-bf31-113c6a5de2a1 0xc002d6dd37 0xc002d6dd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 13:36:26.422: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 16 13:36:26.422: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4690,SelfLink:/apis/apps/v1/namespaces/deployment-4690/replicasets/test-recreate-deployment-6df85df6b9,UID:40e964bc-5882-4610-8606-63bce8fc4862,ResourceVersion:11219105,Generation:2,CreationTimestamp:2020-05-16 13:36:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e452a0c9-be2e-4bf4-bf31-113c6a5de2a1 0xc002d6de17 0xc002d6de18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 13:36:26.425: INFO: Pod "test-recreate-deployment-5c8c9cc69d-j9n26" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-j9n26,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4690,SelfLink:/api/v1/namespaces/deployment-4690/pods/test-recreate-deployment-5c8c9cc69d-j9n26,UID:62c93499-a0e7-4c63-99f8-3983d1e16dfe,ResourceVersion:11219118,Generation:0,CreationTimestamp:2020-05-16 13:36:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8893f0e5-4d17-4f37-a63c-51198aed0867 0xc00216e707 0xc00216e708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kg5nc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kg5nc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kg5nc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00216e780} {node.kubernetes.io/unreachable Exists NoExecute 0xc00216e7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:36:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:36:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:36:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 13:36:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-16 13:36:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:36:26.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4690" for this suite. 
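The object dumps above catch the Recreate rollout in flight: the old ReplicaSet (test-recreate-deployment-6df85df6b9) is already at Replicas:*0 while the new one (test-recreate-deployment-5c8c9cc69d) is still coming up. The ordering guarantee that strategy provides — old pods fully scaled down before any new pod is created — can be sketched as follows (an illustrative model, not the deployment controller):

```go
package main

import "fmt"

// replicaSet is a minimal stand-in holding just a name and a replica count.
type replicaSet struct {
	Name     string
	Replicas int
}

// recreateRollout models strategy type Recreate: the old ReplicaSet is scaled
// to zero before the new one is scaled up, so old and new pods never overlap.
func recreateRollout(oldRS, newRS *replicaSet, desired int) []string {
	oldRS.Replicas = 0 // step 1: scale the old ReplicaSet down completely
	steps := []string{fmt.Sprintf("scale %s to 0", oldRS.Name)}
	newRS.Replicas = desired // step 2: only then scale the new one up
	steps = append(steps, fmt.Sprintf("scale %s to %d", newRS.Name, desired))
	return steps
}

func main() {
	oldRS := &replicaSet{Name: "test-recreate-deployment-6df85df6b9", Replicas: 1}
	newRS := &replicaSet{Name: "test-recreate-deployment-5c8c9cc69d"}
	for _, step := range recreateRollout(oldRS, newRS, 1) {
		fmt.Println(step)
	}
}
```

This is the property the test's watch ("new pods will not run with old pods") is checking.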
May 16 13:36:32.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:36:32.687: INFO: namespace deployment-4690 deletion completed in 6.256763324s • [SLOW TEST:11.125 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:36:32.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 16 13:36:32.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-884' May 16 13:36:33.091: INFO: stderr: "" May 16 13:36:33.091: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 16 13:36:33.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-884' May 16 13:36:33.208: INFO: stderr: "" May 16 13:36:33.208: INFO: stdout: "update-demo-nautilus-6hqrh update-demo-nautilus-xzj6n " May 16 13:36:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hqrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-884' May 16 13:36:33.312: INFO: stderr: "" May 16 13:36:33.312: INFO: stdout: "" May 16 13:36:33.312: INFO: update-demo-nautilus-6hqrh is created but not running May 16 13:36:38.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-884' May 16 13:36:38.413: INFO: stderr: "" May 16 13:36:38.414: INFO: stdout: "update-demo-nautilus-6hqrh update-demo-nautilus-xzj6n " May 16 13:36:38.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hqrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-884' May 16 13:36:38.509: INFO: stderr: "" May 16 13:36:38.509: INFO: stdout: "true" May 16 13:36:38.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6hqrh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-884' May 16 13:36:38.612: INFO: stderr: "" May 16 13:36:38.612: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:36:38.612: INFO: validating pod update-demo-nautilus-6hqrh May 16 13:36:38.616: INFO: got data: { "image": "nautilus.jpg" } May 16 13:36:38.616: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 13:36:38.616: INFO: update-demo-nautilus-6hqrh is verified up and running May 16 13:36:38.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzj6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-884' May 16 13:36:38.718: INFO: stderr: "" May 16 13:36:38.718: INFO: stdout: "true" May 16 13:36:38.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzj6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-884' May 16 13:36:38.818: INFO: stderr: "" May 16 13:36:38.818: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:36:38.818: INFO: validating pod update-demo-nautilus-xzj6n May 16 13:36:38.823: INFO: got data: { "image": "nautilus.jpg" } May 16 13:36:38.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 16 13:36:38.823: INFO: update-demo-nautilus-xzj6n is verified up and running STEP: using delete to clean up resources May 16 13:36:38.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-884' May 16 13:36:39.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:36:39.081: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 13:36:39.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-884' May 16 13:36:39.367: INFO: stderr: "No resources found.\n" May 16 13:36:39.367: INFO: stdout: "" May 16 13:36:39.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-884 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 13:36:39.830: INFO: stderr: "" May 16 13:36:39.830: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:36:39.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-884" for this suite. 
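The `kubectl get pods -o template --template=...` invocations above are ordinary Go text/template programs evaluated against the object's decoded JSON. The pod-name query from the log can be reproduced with the standard library alone; the pod list below is a hand-built stand-in for the API response:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderPodNames executes the same template kubectl ran above,
// {{range .items}}{{.metadata.name}} {{end}}, against arbitrary data.
func renderPodNames(data interface{}) string {
	t := template.Must(template.New("podnames").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var sb strings.Builder
	if err := t.Execute(&sb, data); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Stand-in for the decoded `kubectl get pods -o json` pod list
	// (names taken from the log output above).
	podList := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-6hqrh"}},
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-xzj6n"}},
		},
	}
	fmt.Println(renderPodNames(podList))
	// prints: update-demo-nautilus-6hqrh update-demo-nautilus-xzj6n
}
```

Note the trailing space per item in the output — the same artifact visible in the log's stdout lines ("update-demo-nautilus-6hqrh update-demo-nautilus-xzj6n "). The richer templates in the log additionally use kubectl-registered helper functions such as `exists`, which are not part of the stdlib template package.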
May 16 13:36:45.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:36:45.957: INFO: namespace kubectl-884 deletion completed in 6.122801578s • [SLOW TEST:13.270 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:36:45.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 16 13:36:54.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:36:54.093: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:36:56.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:36:56.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:36:58.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:36:58.097: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:00.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:00.097: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:02.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:02.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:04.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:04.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:06.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:06.097: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:08.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:08.097: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:10.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:10.097: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:12.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:12.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:14.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:14.097: INFO: Pod 
pod-with-poststart-exec-hook still exists May 16 13:37:16.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:16.100: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:18.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:18.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:20.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:20.098: INFO: Pod pod-with-poststart-exec-hook still exists May 16 13:37:22.093: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 16 13:37:22.098: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:37:22.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1175" for this suite. May 16 13:37:44.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:37:44.190: INFO: namespace container-lifecycle-hook-1175 deletion completed in 22.087333518s • [SLOW TEST:58.232 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:37:44.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 16 13:37:44.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e" in namespace "downward-api-4403" to be "success or failure"
May 16 13:37:44.309: INFO: Pod "downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.22915ms
May 16 13:37:46.312: INFO: Pod "downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006709897s
May 16 13:37:48.317: INFO: Pod "downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011768304s
STEP: Saw pod success
May 16 13:37:48.318: INFO: Pod "downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e" satisfied condition "success or failure"
May 16 13:37:48.320: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e container client-container: <nothing>
STEP: delete the pod
May 16 13:37:48.341: INFO: Waiting for pod downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e to disappear
May 16 13:37:48.345: INFO: Pod downwardapi-volume-9a7f136c-8021-4fda-adfc-88a2d44c427e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:37:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4403" for this suite.
May 16 13:37:54.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:37:54.448: INFO: namespace downward-api-4403 deletion completed in 6.09965017s

• [SLOW TEST:10.258 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:37:54.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1585.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.190.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.190.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.190.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.190.48_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1585.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 48.190.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.190.48_udp@PTR;check="$$(dig +tcp +noall +answer +search 48.190.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.190.48_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 16 13:38:00.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.730: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.736: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.738: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:00.750: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:05.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.761: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.764: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.782: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.785: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.788: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.790: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:05.819: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:10.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.760: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.763: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.783: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.785: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.788: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.791: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:10.809: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:15.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.791: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.799: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:15.830: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:20.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.783: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.799: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.802: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.804: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.807: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:20.825: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:25.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.762: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.785: INFO: Unable to read jessie_udp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.788: INFO: Unable to read jessie_tcp@dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.790: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.793: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local from pod dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76: the server could not find the requested resource (get pods dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76)
May 16 13:38:25.809: INFO: Lookups using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 failed for: [wheezy_udp@dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@dns-test-service.dns-1585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_udp@dns-test-service.dns-1585.svc.cluster.local jessie_tcp@dns-test-service.dns-1585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1585.svc.cluster.local]
May 16 13:38:30.815: INFO: DNS probes using dns-1585/dns-test-f5abea51-cf8e-4dd7-bace-d071924c9c76 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:38:31.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1585" for this suite.
May 16 13:38:37.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:38:37.916: INFO: namespace dns-1585 deletion completed in 6.100649577s

• [SLOW TEST:43.468 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:38:37.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-1c5eea17-a6c5-49dd-af80-63120aa863f6
STEP: Creating secret with name secret-projected-all-test-volume-db576eba-a4b8-4852-879d-7ac63c71b685
STEP: Creating a pod to test Check all projections for projected volume plugin
May 16 13:38:38.015: INFO: Waiting up to 5m0s for pod "projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce" in namespace "projected-5831" to be "success or failure"
May 16 13:38:38.019: INFO: Pod "projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071218ms
May 16 13:38:40.041: INFO: Pod "projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025609977s
May 16 13:38:42.045: INFO: Pod "projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029765623s
STEP: Saw pod success
May 16 13:38:42.045: INFO: Pod "projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce" satisfied condition "success or failure"
May 16 13:38:42.048: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce container projected-all-volume-test: <nothing>
STEP: delete the pod
May 16 13:38:42.434: INFO: Waiting for pod projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce to disappear
May 16 13:38:42.443: INFO: Pod projected-volume-c9745390-3ccf-4bb9-880d-b752c6025cce no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:38:42.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5831" for this suite.
May 16 13:38:48.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:38:48.588: INFO: namespace projected-5831 deletion completed in 6.1425335s

• [SLOW TEST:10.672 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:38:48.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 16 13:38:51.858: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:38:51.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6830" for this suite.
May 16 13:38:57.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:38:57.995: INFO: namespace container-runtime-6830 deletion completed in 6.112035992s

• [SLOW TEST:9.406 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:38:57.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-67700f24-03b6-4db9-b2bf-2664296c761d
STEP: Creating a pod to test consume secrets
May 16 13:38:58.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e" in namespace "projected-1877" to be "success or failure"
May 16 13:38:58.088: INFO: Pod "pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.12639ms
May 16 13:39:00.093: INFO: Pod "pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010138714s
May 16 13:39:02.098: INFO: Pod "pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015026797s
STEP: Saw pod success
May 16 13:39:02.098: INFO: Pod "pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e" satisfied condition "success or failure"
May 16 13:39:02.102: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e container projected-secret-volume-test: <nothing>
STEP: delete the pod
May 16 13:39:02.131: INFO: Waiting for pod pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e to disappear
May 16 13:39:02.142: INFO: Pod pod-projected-secrets-efdc6382-b0ef-4a6e-9b09-892c69ffae5e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:39:02.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1877" for this suite.
May 16 13:39:08.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:39:08.253: INFO: namespace projected-1877 deletion completed in 6.107503797s • [SLOW TEST:10.258 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:39:08.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d2d48485-6f17-4a4d-bfad-85b32a6213cc STEP: Creating a pod to test consume secrets May 16 13:39:08.320: INFO: Waiting up to 5m0s for pod "pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca" in namespace "secrets-9289" to be "success or failure" May 16 13:39:08.324: INFO: Pod "pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495025ms May 16 13:39:10.328: INFO: Pod "pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007515482s May 16 13:39:12.332: INFO: Pod "pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011472421s STEP: Saw pod success May 16 13:39:12.332: INFO: Pod "pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca" satisfied condition "success or failure" May 16 13:39:12.335: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca container secret-volume-test: STEP: delete the pod May 16 13:39:12.366: INFO: Waiting for pod pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca to disappear May 16 13:39:12.371: INFO: Pod pod-secrets-05e0f24a-e99f-4c58-8acb-a9f81b1ae0ca no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:39:12.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9289" for this suite. May 16 13:39:18.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:39:18.481: INFO: namespace secrets-9289 deletion completed in 6.106169853s • [SLOW TEST:10.227 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:39:18.481: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-65bc74cd-b0e6-46b5-b621-df1c3abb4588 STEP: Creating configMap with name cm-test-opt-upd-426c0620-fec6-4135-b77e-01eef905ebab STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-65bc74cd-b0e6-46b5-b621-df1c3abb4588 STEP: Updating configmap cm-test-opt-upd-426c0620-fec6-4135-b77e-01eef905ebab STEP: Creating configMap with name cm-test-opt-create-29eac29a-5658-4358-81c5-e48756cb94bd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:39:28.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6306" for this suite. 
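The "optional updates" test above deletes one ConfigMap, updates a second, and creates a third while a pod is watching volumes for all three; `optional: true` is what lets the pod run even when a referenced ConfigMap is absent. A minimal sketch of one such volume, with hypothetical pod/container names and an assumed image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical
spec:
  containers:
  - name: createcm-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume-create
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create   # may not exist yet; optional lets the pod start anyway
      optional: true
```

Once the missing ConfigMap is created (or an existing one is updated), the kubelet syncs the volume contents, which is the "waiting to observe update in volume" step in the log.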
May 16 13:39:50.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:39:50.801: INFO: namespace configmap-6306 deletion completed in 22.097283344s • [SLOW TEST:32.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:39:50.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 16 13:39:50.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 16 13:39:53.605: INFO: stderr: "" May 16 13:39:53.605: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug 
and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:39:53.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2478" for this suite. May 16 13:39:59.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:39:59.715: INFO: namespace kubectl-2478 deletion completed in 6.105841197s • [SLOW TEST:8.914 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:39:59.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-d4ac1d6b-1c8b-46b0-a088-36edccd45032 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary 
data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:40:05.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4601" for this suite. May 16 13:40:27.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:40:27.944: INFO: namespace configmap-4601 deletion completed in 22.107039956s • [SLOW TEST:28.229 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:40:27.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 16 13:40:27.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-3624' May 16 13:40:28.292: INFO: stderr: "" May 16 13:40:28.292: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 13:40:28.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:28.416: INFO: stderr: "" May 16 13:40:28.416: INFO: stdout: "update-demo-nautilus-5s5sk update-demo-nautilus-wjd9d " May 16 13:40:28.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:28.523: INFO: stderr: "" May 16 13:40:28.523: INFO: stdout: "" May 16 13:40:28.523: INFO: update-demo-nautilus-5s5sk is created but not running May 16 13:40:33.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:33.624: INFO: stderr: "" May 16 13:40:33.624: INFO: stdout: "update-demo-nautilus-5s5sk update-demo-nautilus-wjd9d " May 16 13:40:33.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:33.731: INFO: stderr: "" May 16 13:40:33.731: INFO: stdout: "true" May 16 13:40:33.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:33.833: INFO: stderr: "" May 16 13:40:33.833: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:33.833: INFO: validating pod update-demo-nautilus-5s5sk May 16 13:40:33.837: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:33.837: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 13:40:33.837: INFO: update-demo-nautilus-5s5sk is verified up and running May 16 13:40:33.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjd9d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:33.938: INFO: stderr: "" May 16 13:40:33.938: INFO: stdout: "true" May 16 13:40:33.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjd9d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:34.035: INFO: stderr: "" May 16 13:40:34.035: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:34.035: INFO: validating pod update-demo-nautilus-wjd9d May 16 13:40:34.039: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:34.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 16 13:40:34.039: INFO: update-demo-nautilus-wjd9d is verified up and running STEP: scaling down the replication controller May 16 13:40:34.041: INFO: scanned /root for discovery docs: May 16 13:40:34.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3624' May 16 13:40:35.325: INFO: stderr: "" May 16 13:40:35.325: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 13:40:35.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:35.518: INFO: stderr: "" May 16 13:40:35.518: INFO: stdout: "update-demo-nautilus-5s5sk update-demo-nautilus-wjd9d " STEP: Replicas for name=update-demo: expected=1 actual=2 May 16 13:40:40.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:40.620: INFO: stderr: "" May 16 13:40:40.620: INFO: stdout: "update-demo-nautilus-5s5sk " May 16 13:40:40.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:40.720: INFO: stderr: "" May 16 13:40:40.720: INFO: stdout: "true" May 16 13:40:40.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:40.816: INFO: stderr: "" May 16 13:40:40.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:40.816: INFO: validating pod update-demo-nautilus-5s5sk May 16 13:40:40.820: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:40.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 13:40:40.820: INFO: update-demo-nautilus-5s5sk is verified up and running STEP: scaling up the replication controller May 16 13:40:40.822: INFO: scanned /root for discovery docs: May 16 13:40:40.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3624' May 16 13:40:41.959: INFO: stderr: "" May 16 13:40:41.959: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 13:40:41.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:42.050: INFO: stderr: "" May 16 13:40:42.050: INFO: stdout: "update-demo-nautilus-5s5sk update-demo-nautilus-9g5j8 " May 16 13:40:42.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:42.148: INFO: stderr: "" May 16 13:40:42.148: INFO: stdout: "true" May 16 13:40:42.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:42.239: INFO: stderr: "" May 16 13:40:42.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:42.239: INFO: validating pod update-demo-nautilus-5s5sk May 16 13:40:42.246: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:42.246: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 13:40:42.246: INFO: update-demo-nautilus-5s5sk is verified up and running May 16 13:40:42.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9g5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:42.341: INFO: stderr: "" May 16 13:40:42.341: INFO: stdout: "" May 16 13:40:42.341: INFO: update-demo-nautilus-9g5j8 is created but not running May 16 13:40:47.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3624' May 16 13:40:47.437: INFO: stderr: "" May 16 13:40:47.437: INFO: stdout: "update-demo-nautilus-5s5sk update-demo-nautilus-9g5j8 " May 16 13:40:47.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:47.534: INFO: stderr: "" May 16 13:40:47.534: INFO: stdout: "true" May 16 13:40:47.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5s5sk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:47.626: INFO: stderr: "" May 16 13:40:47.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:47.626: INFO: validating pod update-demo-nautilus-5s5sk May 16 13:40:47.629: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:47.629: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 13:40:47.629: INFO: update-demo-nautilus-5s5sk is verified up and running May 16 13:40:47.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9g5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:47.722: INFO: stderr: "" May 16 13:40:47.722: INFO: stdout: "true" May 16 13:40:47.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9g5j8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3624' May 16 13:40:47.809: INFO: stderr: "" May 16 13:40:47.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 13:40:47.810: INFO: validating pod update-demo-nautilus-9g5j8 May 16 13:40:47.813: INFO: got data: { "image": "nautilus.jpg" } May 16 13:40:47.813: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 16 13:40:47.813: INFO: update-demo-nautilus-9g5j8 is verified up and running STEP: using delete to clean up resources May 16 13:40:47.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3624' May 16 13:40:47.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:40:47.924: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 13:40:47.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3624' May 16 13:40:48.021: INFO: stderr: "No resources found.\n" May 16 13:40:48.021: INFO: stdout: "" May 16 13:40:48.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3624 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 13:40:48.127: INFO: stderr: "" May 16 13:40:48.127: INFO: stdout: "update-demo-nautilus-5s5sk\nupdate-demo-nautilus-9g5j8\n" May 16 13:40:48.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3624' May 16 13:40:48.734: INFO: stderr: "No resources found.\n" May 16 13:40:48.734: INFO: stdout: "" May 16 13:40:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3624 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 13:40:48.911: INFO: stderr: "" May 16 13:40:48.911: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:40:48.911: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3624" for this suite. May 16 13:41:10.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:41:11.067: INFO: namespace kubectl-3624 deletion completed in 22.151774958s • [SLOW TEST:43.122 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:41:11.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 13:41:11.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod 
--restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4879' May 16 13:41:11.255: INFO: stderr: "" May 16 13:41:11.255: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 16 13:41:11.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4879' May 16 13:41:21.857: INFO: stderr: "" May 16 13:41:21.857: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:41:21.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4879" for this suite. May 16 13:41:27.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:41:27.948: INFO: namespace kubectl-4879 deletion completed in 6.088515219s • [SLOW TEST:16.882 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client May 16 13:41:27.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 16 13:41:28.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:28.103: INFO: Number of nodes with available pods: 0 May 16 13:41:28.103: INFO: Node iruya-worker is running more than one daemon pod May 16 13:41:29.134: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:29.138: INFO: Number of nodes with available pods: 0 May 16 13:41:29.138: INFO: Node iruya-worker is running more than one daemon pod May 16 13:41:30.271: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:30.291: INFO: Number of nodes with available pods: 0 May 16 13:41:30.291: INFO: Node iruya-worker is running more than one daemon pod May 16 13:41:31.107: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:31.110: INFO: Number of nodes with available pods: 0 May 16 13:41:31.110: INFO: Node iruya-worker is running more than one daemon pod May 16 13:41:32.114: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:32.142: INFO: Number of nodes with available pods: 1 May 16 13:41:32.142: INFO: Node iruya-worker2 is running more than one daemon pod May 16 13:41:33.107: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:33.133: INFO: Number of nodes with available pods: 2 May 16 13:41:33.134: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 16 13:41:33.149: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 13:41:33.154: INFO: Number of nodes with available pods: 2 May 16 13:41:33.154: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8789, will wait for the garbage collector to delete the pods May 16 13:41:34.245: INFO: Deleting DaemonSet.extensions daemon-set took: 6.077306ms May 16 13:41:34.545: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.354588ms May 16 13:41:42.248: INFO: Number of nodes with available pods: 0 May 16 13:41:42.248: INFO: Number of running nodes: 0, number of available pods: 0 May 16 13:41:42.250: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8789/daemonsets","resourceVersion":"11220281"},"items":null} May 16 13:41:42.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8789/pods","resourceVersion":"11220281"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:41:42.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8789" for this suite. 
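The DaemonSet test above expects one pod per schedulable node; the repeated "can't tolerate node iruya-control-plane" lines show it deliberately skips the tainted control-plane node because the DaemonSet carries no master toleration. A rough shape of the "daemon-set" object, where only the name comes from the log and the labels and image are assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label key/value
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      # no toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the control-plane node is skipped, as logged above
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed; image not shown in this log
```

After both workers report an available pod, the test sets one pod's phase to Failed and checks that the DaemonSet controller replaces it.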
May 16 13:41:48.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:41:48.348: INFO: namespace daemonsets-8789 deletion completed in 6.085887239s • [SLOW TEST:20.399 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:41:48.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5ca136f0-ef7d-4020-b2cd-59b65f702b89 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5ca136f0-ef7d-4020-b2cd-59b65f702b89 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:43:14.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9998" for this suite. 
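The "updates should be reflected in volume" test above edits a configMap and then waits for the mounted files to change. Kubelet publishes such updates atomically: new file contents are written into a fresh timestamped directory and a `..data` symlink is swapped over, so a reader never observes a half-written update. The sketch below simulates that swap; the directory naming is illustrative, not kubelet's exact layout:

```python
# Hedged sketch of the atomic-update scheme kubelet uses for configMap/secret
# volumes: payload files live in a timestamped directory and a "..data"
# symlink is swapped so readers never see a partially written update.
import os
import tempfile

def publish(volume: str, payload: dict, ts: str) -> None:
    # Write the new payload into a fresh timestamped directory.
    new_dir = os.path.join(volume, f"..{ts}")
    os.mkdir(new_dir)
    for name, data in payload.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(data)
    # Point a temporary symlink at it, then rename over "..data".
    tmp_link = os.path.join(volume, "..data_tmp")
    os.symlink(f"..{ts}", tmp_link)
    # rename() over the old symlink is atomic on POSIX filesystems.
    os.replace(tmp_link, os.path.join(volume, "..data"))

volume = tempfile.mkdtemp()
publish(volume, {"key": "value-1"}, "2020_05_16_0001")
publish(volume, {"key": "value-2"}, "2020_05_16_0002")
with open(os.path.join(volume, "..data", "key")) as f:
    print(f.read())   # value-2
```

The delay the test tolerates (tens of seconds here) comes from kubelet's periodic sync noticing the changed configMap before performing the swap.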
May 16 13:43:36.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:43:37.005: INFO: namespace projected-9998 deletion completed in 22.109245001s • [SLOW TEST:108.657 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:43:37.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-2aa7f828-c642-4839-acf8-7feafe6d92c1 STEP: Creating a pod to test consume secrets May 16 13:43:37.110: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27" in namespace "projected-5174" to be "success or failure" May 16 13:43:37.114: INFO: Pod "pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555663ms May 16 13:43:39.118: INFO: Pod "pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008156316s May 16 13:43:41.123: INFO: Pod "pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012939149s STEP: Saw pod success May 16 13:43:41.123: INFO: Pod "pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27" satisfied condition "success or failure" May 16 13:43:41.126: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27 container secret-volume-test: STEP: delete the pod May 16 13:43:41.145: INFO: Waiting for pod pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27 to disappear May 16 13:43:41.150: INFO: Pod pod-projected-secrets-a5516501-672c-424b-9133-0c4d1479ff27 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:43:41.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5174" for this suite. 
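Each pod-based test above logs "Waiting up to 5m0s for pod ... to be 'success or failure'", polling the pod phase roughly every two seconds (the Elapsed values step by ~2s) until it reaches a terminal phase or the timeout expires. A minimal sketch of such a wait loop, with the phase getter injected so it can be exercised without a cluster (the helper name is hypothetical):

```python
import time

# Hypothetical wait helper mirroring the framework's "Waiting up to 5m0s for
# pod ... to be 'success or failure'" loop: poll a phase getter on a fixed
# interval until a terminal phase ("Succeeded"/"Failed") or the timeout.
def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout_s
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(interval_s)

# Simulated pod that stays Pending for two polls, then succeeds —
# the same Pending, Pending, Succeeded progression seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None))
# Succeeded
```

Note the test treats `Succeeded` as "success or failure" satisfied; a pod that reaches `Failed` would also end the wait, with the assertion on the phase made afterwards.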
May 16 13:43:47.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:43:47.248: INFO: namespace projected-5174 deletion completed in 6.094197759s • [SLOW TEST:10.243 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:43:47.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0516 13:44:17.860115 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 16 13:44:17.860: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:44:17.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8262" for this suite. 
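The garbage collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet is *not* deleted. With `Orphan`, the garbage collector strips the dependents' ownerReferences instead of cascading; with `Background`/`Foreground` it deletes them. A simplified model of those semantics (object names are illustrative):

```python
# Simplified model of deleteOptions.propagationPolicy semantics, illustrating
# what the test above verifies: "Orphan" keeps dependents (the Deployment's
# ReplicaSet) alive with their owner reference removed, while
# "Background"/"Foreground" cascade the delete. Names are illustrative.
def delete(store: dict, name: str, policy: str) -> None:
    dependents = [k for k, obj in store.items() if name in obj["owners"]]
    if policy == "Orphan":
        # Garbage collector strips the owner reference but keeps the object.
        for k in dependents:
            store[k]["owners"].remove(name)
    else:  # Background/Foreground: cascade to dependents.
        for k in dependents:
            delete(store, k, policy)
    del store[name]

cluster = {
    "deployment": {"owners": []},
    "replicaset": {"owners": ["deployment"]},
    "pod":        {"owners": ["replicaset"]},
}
delete(cluster, "deployment", "Orphan")
print(sorted(cluster))                   # ['pod', 'replicaset'] — RS survives
print(cluster["replicaset"]["owners"])   # [] — owner reference removed
```

The real garbage collector builds this owner graph from `metadata.ownerReferences` cluster-wide; the metrics dump above lists its work queues (`garbage_collector_attempt_to_delete_*`, `garbage_collector_attempt_to_orphan_*`) that process these two paths.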
May 16 13:44:23.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:44:24.134: INFO: namespace gc-8262 deletion completed in 6.271824955s • [SLOW TEST:36.886 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:44:24.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2411.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2411.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 13:44:30.243: INFO: DNS probes using dns-2411/dns-test-17b92b9b-6144-4c45-954c-410336e2f81b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:44:30.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2411" for this suite. 
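The `awk` pipeline in the probe scripts above builds the pod's DNS A record from its IP: dots become dashes, qualified by `<namespace>.pod.<cluster-domain>`. The same transformation in Python (the IP below is illustrative; the namespace `dns-2411` is the one from this run):

```python
# Equivalent of the awk step in the dig probe script above: a pod's A record
# is its IP with dots replaced by dashes, under <namespace>.pod.<domain>.
# The pod IP here is illustrative.
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-2411"))
# 10-244-1-5.dns-2411.pod.cluster.local
```

The probe pods then `dig` this name over both UDP (`+notcp`) and TCP (`+tcp`), alongside `kubernetes.default.svc.cluster.local`, writing an `OK` marker file for each lookup that returns an answer.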
May 16 13:44:36.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:44:36.413: INFO: namespace dns-2411 deletion completed in 6.109070832s • [SLOW TEST:12.278 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:44:36.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 16 13:44:36.497: INFO: Waiting up to 5m0s for pod "pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0" in namespace "emptydir-4882" to be "success or failure" May 16 13:44:36.501: INFO: Pod "pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.581701ms May 16 13:44:38.549: INFO: Pod "pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051983178s May 16 13:44:40.555: INFO: Pod "pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057576279s STEP: Saw pod success May 16 13:44:40.555: INFO: Pod "pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0" satisfied condition "success or failure" May 16 13:44:40.557: INFO: Trying to get logs from node iruya-worker2 pod pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0 container test-container: STEP: delete the pod May 16 13:44:40.574: INFO: Waiting for pod pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0 to disappear May 16 13:44:40.584: INFO: Pod pod-cc4432c7-d93e-4c72-a920-8125dc80d4f0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:44:40.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4882" for this suite. May 16 13:44:46.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:44:46.677: INFO: namespace emptydir-4882 deletion completed in 6.089635541s • [SLOW TEST:10.264 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:44:46.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:44:46.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9" in namespace "projected-4839" to be "success or failure" May 16 13:44:46.771: INFO: Pod "downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.614622ms May 16 13:44:48.776: INFO: Pod "downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033249444s May 16 13:44:50.780: INFO: Pod "downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037146241s STEP: Saw pod success May 16 13:44:50.780: INFO: Pod "downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9" satisfied condition "success or failure" May 16 13:44:50.783: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9 container client-container: STEP: delete the pod May 16 13:44:50.822: INFO: Waiting for pod downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9 to disappear May 16 13:44:50.900: INFO: Pod downwardapi-volume-90f1d47d-15c4-4e80-ad8b-6569a2d1e0d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:44:50.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4839" for this suite. 
May 16 13:44:56.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:44:56.998: INFO: namespace projected-4839 deletion completed in 6.093829066s • [SLOW TEST:10.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:44:56.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 16 13:44:57.105: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5671" to be "success or failure" May 16 13:44:57.123: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.875676ms May 16 13:44:59.127: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021832915s May 16 13:45:01.131: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025875888s May 16 13:45:03.135: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030265263s STEP: Saw pod success May 16 13:45:03.136: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 16 13:45:03.138: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 16 13:45:03.175: INFO: Waiting for pod pod-host-path-test to disappear May 16 13:45:03.188: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:45:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5671" for this suite. May 16 13:45:09.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:45:09.275: INFO: namespace hostpath-5671 deletion completed in 6.083475317s • [SLOW TEST:12.277 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:45:09.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be 
reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-1573bdb4-7328-49f1-8241-f61b14a1c22f STEP: Creating configMap with name cm-test-opt-upd-1398af7b-38fd-40ea-8423-c32ef4206441 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1573bdb4-7328-49f1-8241-f61b14a1c22f STEP: Updating configmap cm-test-opt-upd-1398af7b-38fd-40ea-8423-c32ef4206441 STEP: Creating configMap with name cm-test-opt-create-fa361330-8203-4579-b762-5332c234903d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:45:17.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3051" for this suite. May 16 13:45:39.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:45:39.701: INFO: namespace projected-3051 deletion completed in 22.130580126s • [SLOW TEST:30.426 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:45:39.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-42pv7 in namespace proxy-6677 I0516 13:45:39.820340 6 runners.go:180] Created replication controller with name: proxy-service-42pv7, namespace: proxy-6677, replica count: 1 I0516 13:45:40.870751 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:45:41.870976 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:45:42.871187 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0516 13:45:43.871386 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 13:45:44.871559 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0516 13:45:45.871780 6 runners.go:180] proxy-service-42pv7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 16 13:45:45.875: INFO: setup took 6.123564401s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 16 13:45:45.882: INFO: (0) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 6.816443ms) May 16 13:45:45.882: INFO: (0) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 6.77245ms) May 16 13:45:45.882: INFO: (0) 
/api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 6.839409ms) May 16 13:45:45.883: INFO: (0) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 7.621159ms) May 16 13:45:45.883: INFO: (0) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 8.421652ms) May 16 13:45:45.884: INFO: (0) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 8.997665ms) May 16 13:45:45.884: INFO: (0) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 9.142865ms) May 16 13:45:45.884: INFO: (0) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 9.221086ms) May 16 13:45:45.884: INFO: (0) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 9.345185ms) May 16 13:45:45.888: INFO: (0) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 13.210373ms) May 16 13:45:45.888: INFO: (0) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 13.340844ms) May 16 13:45:45.898: INFO: (0) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 23.24668ms) May 16 13:45:45.898: INFO: (0) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test (200; 6.70005ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 6.731171ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 6.623647ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 6.81687ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 6.835635ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 6.871264ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 6.806947ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 6.848273ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 6.892329ms) May 16 13:45:45.905: INFO: (1) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 6.941553ms) May 16 13:45:45.906: INFO: (1) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 7.120302ms) May 16 13:45:45.906: INFO: (1) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 7.074987ms) May 16 13:45:45.910: INFO: (2) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.96235ms) May 16 13:45:45.910: INFO: (2) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.64972ms) May 16 13:45:45.911: INFO: (2) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 5.06644ms) May 16 13:45:45.911: INFO: (2) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 5.12469ms) May 16 13:45:45.911: INFO: (2) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 5.340003ms) May 16 13:45:45.911: INFO: (2) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 5.527681ms) May 16 13:45:45.912: INFO: (2) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 6.009732ms) May 16 13:45:45.912: INFO: (2) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.996098ms) May 16 13:45:45.912: INFO: (2) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 5.964967ms) May 16 13:45:45.912: INFO: (2) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: ... (200; 4.236416ms) May 16 13:45:45.918: INFO: (3) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 4.234658ms) May 16 13:45:45.917: INFO: (3) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.172377ms) May 16 13:45:45.917: INFO: (3) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 4.141462ms) May 16 13:45:45.918: INFO: (3) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.22179ms) May 16 13:45:45.917: INFO: (3) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.201809ms) May 16 13:45:45.918: INFO: (3) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.284929ms) May 16 13:45:45.918: INFO: (3) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.195374ms) May 16 13:45:45.919: INFO: (3) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 5.496758ms) May 16 13:45:45.919: INFO: (3) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 5.591577ms) May 16 13:45:45.919: INFO: (3) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 5.568345ms) May 16 13:45:45.919: INFO: (3) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 5.603263ms) May 16 13:45:45.919: INFO: (3) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 5.469957ms) May 16 13:45:45.922: INFO: (4) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 3.272612ms) May 16 13:45:45.922: INFO: (4) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.580852ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... 
(200; 3.615354ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 3.628236ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 3.590925ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.896232ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 3.840298ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.836404ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.012894ms) May 16 13:45:45.923: INFO: (4) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... (200; 3.057162ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 4.962491ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.877592ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 4.875389ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 5.119748ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 5.13758ms) May 16 13:45:45.930: INFO: (5) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test (200; 3.231148ms) May 16 13:45:45.935: INFO: (6) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 3.761608ms) May 16 13:45:45.936: INFO: (6) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.165191ms) May 16 13:45:45.936: INFO: (6) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 4.307953ms) May 16 13:45:45.936: INFO: (6) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.185364ms) May 16 13:45:45.936: INFO: (6) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.297654ms) May 16 13:45:45.936: INFO: (6) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: ... (200; 4.025117ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... (200; 4.467283ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.531933ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 4.583128ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.576321ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.713016ms) May 16 13:45:45.942: INFO: (7) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.634401ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.164378ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 5.233029ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 5.398765ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 5.54424ms) 
May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 5.52984ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 5.515257ms) May 16 13:45:45.943: INFO: (7) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 5.609114ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 4.560762ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 4.735726ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 4.797294ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.741674ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 4.837436ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 4.940836ms) May 16 13:45:45.948: INFO: (8) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 4.852187ms) May 16 13:45:45.949: INFO: (8) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 5.17521ms) May 16 13:45:45.949: INFO: (8) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... 
(200; 5.84162ms) May 16 13:45:45.949: INFO: (8) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 5.788255ms) May 16 13:45:45.949: INFO: (8) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 5.78928ms) May 16 13:45:45.949: INFO: (8) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 5.777932ms) May 16 13:45:45.952: INFO: (9) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 2.578886ms) May 16 13:45:45.955: INFO: (9) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.410236ms) May 16 13:45:45.955: INFO: (9) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 5.388681ms) May 16 13:45:45.955: INFO: (9) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 6.145058ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... 
(200; 6.235066ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 6.281164ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 6.456056ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 6.455551ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 6.977164ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 6.940898ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 6.949632ms) May 16 13:45:45.956: INFO: (9) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 7.213205ms) May 16 13:45:45.957: INFO: (9) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 7.857246ms) May 16 13:45:45.957: INFO: (9) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... (200; 6.343865ms) May 16 13:45:45.964: INFO: (10) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 6.365381ms) May 16 13:45:45.964: INFO: (10) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 6.432572ms) May 16 13:45:45.964: INFO: (10) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 6.435693ms) May 16 13:45:45.964: INFO: (10) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 6.356768ms) May 16 13:45:45.964: INFO: (10) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... 
(200; 3.167595ms) May 16 13:45:45.968: INFO: (11) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 2.970764ms) May 16 13:45:45.968: INFO: (11) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 3.618249ms) May 16 13:45:45.968: INFO: (11) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test (200; 3.722374ms) May 16 13:45:45.968: INFO: (11) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.863175ms) May 16 13:45:45.968: INFO: (11) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 3.989052ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.294103ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 4.690633ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 4.552687ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.902346ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 4.700279ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 4.855785ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 4.591958ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.907303ms) May 16 13:45:45.969: INFO: (11) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 4.923454ms) May 16 13:45:45.974: INFO: (12) 
/api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 4.594151ms) May 16 13:45:45.974: INFO: (12) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.59749ms) May 16 13:45:45.974: INFO: (12) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.887992ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.8376ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 4.988461ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 4.83338ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.959955ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 5.041104ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 5.026467ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 5.113558ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.355883ms) May 16 13:45:45.975: INFO: (12) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test (200; 5.481442ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 5.495356ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: ... (200; 5.538829ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 5.542665ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 5.485651ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 5.477256ms) May 16 13:45:45.981: INFO: (13) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 5.758049ms) May 16 13:45:45.984: INFO: (14) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 2.975506ms) May 16 13:45:45.984: INFO: (14) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.059004ms) May 16 13:45:45.984: INFO: (14) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.017711ms) May 16 13:45:45.985: INFO: (14) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 3.092403ms) May 16 13:45:45.985: INFO: (14) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 3.393266ms) May 16 13:45:45.985: INFO: (14) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 3.557274ms) May 16 13:45:45.985: INFO: (14) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: ... 
(200; 3.845455ms) May 16 13:45:45.990: INFO: (15) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.864916ms) May 16 13:45:45.990: INFO: (15) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.975222ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.012922ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.00079ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.237355ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test<... (200; 4.341026ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.371391ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 4.672287ms) May 16 13:45:45.991: INFO: (15) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 4.928296ms) May 16 13:45:45.994: INFO: (16) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 2.913913ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 3.349364ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.314905ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 3.303144ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: ... 
(200; 3.314719ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.841014ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 3.88382ms) May 16 13:45:45.995: INFO: (16) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.861978ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 3.993778ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 4.189812ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 4.138461ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 4.358887ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.465381ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 4.405335ms) May 16 13:45:45.996: INFO: (16) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 4.596452ms) May 16 13:45:45.998: INFO: (17) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 2.184915ms) May 16 13:45:45.999: INFO: (17) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 2.246506ms) May 16 13:45:45.999: INFO: (17) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 2.268174ms) May 16 13:45:45.999: INFO: (17) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 2.854142ms) May 16 13:45:45.999: INFO: (17) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 2.912759ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 3.470095ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 3.479233ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 3.557037ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 3.613316ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 3.57252ms) May 16 13:45:46.000: INFO: (17) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: test (200; 4.280012ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.138652ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... (200; 4.32272ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.310607ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... 
(200; 4.092871ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.151894ms) May 16 13:45:46.005: INFO: (18) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 4.727947ms) May 16 13:45:46.009: INFO: (19) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname1/proxy/: foo (200; 4.056745ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:1080/proxy/: ... (200; 4.127939ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:462/proxy/: tls qux (200; 4.276509ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl/proxy/: test (200; 4.354404ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname1/proxy/: tls baz (200; 4.613269ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:460/proxy/: tls baz (200; 4.599634ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.625966ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:162/proxy/: bar (200; 4.556207ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/services/https:proxy-service-42pv7:tlsportname2/proxy/: tls qux (200; 4.777108ms) May 16 13:45:46.010: INFO: (19) /api/v1/namespaces/proxy-6677/pods/http:proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 4.977013ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:1080/proxy/: test<... 
(200; 5.302843ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname1/proxy/: foo (200; 5.259848ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/services/http:proxy-service-42pv7:portname2/proxy/: bar (200; 5.292278ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/pods/proxy-service-42pv7-8gdgl:160/proxy/: foo (200; 5.312975ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/services/proxy-service-42pv7:portname2/proxy/: bar (200; 5.35284ms) May 16 13:45:46.011: INFO: (19) /api/v1/namespaces/proxy-6677/pods/https:proxy-service-42pv7-8gdgl:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 13:45:58.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9424' May 16 13:45:58.238: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 13:45:58.238: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 16 13:45:58.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9424' May 16 13:45:58.354: INFO: stderr: "" May 16 13:45:58.354: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:45:58.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9424" for this suite. May 16 13:46:04.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:46:04.461: INFO: namespace kubectl-9424 deletion completed in 6.104475208s • [SLOW TEST:6.467 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client May 16 13:46:04.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 16 13:46:04.507: INFO: PodSpec: initContainers in spec.initContainers May 16 13:46:58.868: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8a5411d7-9adb-442c-b08a-462102ef9ae0", GenerateName:"", Namespace:"init-container-2824", SelfLink:"/api/v1/namespaces/init-container-2824/pods/pod-init-8a5411d7-9adb-442c-b08a-462102ef9ae0", UID:"17febc64-43ff-4132-9552-09d712f6f16d", ResourceVersion:"11221287", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725233564, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"507652668"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bnbwp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a2fd80), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bnbwp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bnbwp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bnbwp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b7f838), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002dbf800), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b7f8c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b7f8e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b7f8e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b7f8ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725233564, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725233564, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725233564, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725233564, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.193", StartTime:(*v1.Time)(0xc002000c00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000be3730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000be37a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3fb85da2fbb4773f6b90dfdfbfa5aa940648423ee5933fd2d4ffc40a29d0a772"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002000c40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002000c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:46:58.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2824" for this suite. May 16 13:47:20.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:47:21.058: INFO: namespace init-container-2824 deletion completed in 22.09886505s • [SLOW TEST:76.597 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:47:21.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller 
STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25 May 16 13:47:21.204: INFO: Pod name my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25: Found 0 pods out of 1 May 16 13:47:26.223: INFO: Pod name my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25: Found 1 pods out of 1 May 16 13:47:26.223: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25" are running May 16 13:47:26.226: INFO: Pod "my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25-6qkqb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 13:47:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 13:47:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 13:47:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 13:47:21 +0000 UTC Reason: Message:}]) May 16 13:47:26.226: INFO: Trying to dial the pod May 16 13:47:31.238: INFO: Controller my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25: Got expected result from replica 1 [my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25-6qkqb]: "my-hostname-basic-549e49ae-5f4a-4b55-ab24-d0d170b07a25-6qkqb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:47:31.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-1516" for this suite. May 16 13:47:37.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:47:37.336: INFO: namespace replication-controller-1516 deletion completed in 6.094100727s • [SLOW TEST:16.277 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:47:37.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:47:37.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446" in namespace "downward-api-8209" to be "success or failure" May 16 13:47:37.479: INFO: Pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.825905ms May 16 13:47:39.546: INFO: Pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089191991s May 16 13:47:41.551: INFO: Pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446": Phase="Running", Reason="", readiness=true. Elapsed: 4.093888025s May 16 13:47:43.556: INFO: Pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098587079s STEP: Saw pod success May 16 13:47:43.556: INFO: Pod "downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446" satisfied condition "success or failure" May 16 13:47:43.559: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446 container client-container: STEP: delete the pod May 16 13:47:43.620: INFO: Waiting for pod downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446 to disappear May 16 13:47:43.629: INFO: Pod downwardapi-volume-3369f409-41f2-4fec-b99b-d5bda9118446 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:47:43.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8209" for this suite. 
May 16 13:47:49.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:47:49.723: INFO: namespace downward-api-8209 deletion completed in 6.091193585s • [SLOW TEST:12.387 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:47:49.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 13:47:49.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 16 13:47:49.889: INFO: stderr: "" May 16 13:47:49.889: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", 
GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:47:49.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6846" for this suite. May 16 13:47:55.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:47:56.003: INFO: namespace kubectl-6846 deletion completed in 6.096928716s • [SLOW TEST:6.279 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:47:56.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test 
in namespace pod-network-test-1127 STEP: creating a selector STEP: Creating the service pods in kubernetes May 16 13:47:56.063: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 16 13:48:18.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.12:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1127 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:48:18.192: INFO: >>> kubeConfig: /root/.kube/config I0516 13:48:18.230786 6 log.go:172] (0xc002b71080) (0xc001376140) Create stream I0516 13:48:18.230814 6 log.go:172] (0xc002b71080) (0xc001376140) Stream added, broadcasting: 1 I0516 13:48:18.232911 6 log.go:172] (0xc002b71080) Reply frame received for 1 I0516 13:48:18.232952 6 log.go:172] (0xc002b71080) (0xc002408aa0) Create stream I0516 13:48:18.232969 6 log.go:172] (0xc002b71080) (0xc002408aa0) Stream added, broadcasting: 3 I0516 13:48:18.234188 6 log.go:172] (0xc002b71080) Reply frame received for 3 I0516 13:48:18.234218 6 log.go:172] (0xc002b71080) (0xc002408be0) Create stream I0516 13:48:18.234230 6 log.go:172] (0xc002b71080) (0xc002408be0) Stream added, broadcasting: 5 I0516 13:48:18.235178 6 log.go:172] (0xc002b71080) Reply frame received for 5 I0516 13:48:18.301842 6 log.go:172] (0xc002b71080) Data frame received for 3 I0516 13:48:18.301876 6 log.go:172] (0xc002408aa0) (3) Data frame handling I0516 13:48:18.301898 6 log.go:172] (0xc002408aa0) (3) Data frame sent I0516 13:48:18.301917 6 log.go:172] (0xc002b71080) Data frame received for 3 I0516 13:48:18.301976 6 log.go:172] (0xc002408aa0) (3) Data frame handling I0516 13:48:18.302415 6 log.go:172] (0xc002b71080) Data frame received for 5 I0516 13:48:18.302446 6 log.go:172] (0xc002408be0) (5) Data frame handling I0516 13:48:18.303606 6 log.go:172] (0xc002b71080) Data frame received for 1 I0516 13:48:18.303622 6 
log.go:172] (0xc001376140) (1) Data frame handling I0516 13:48:18.303632 6 log.go:172] (0xc001376140) (1) Data frame sent I0516 13:48:18.303649 6 log.go:172] (0xc002b71080) (0xc001376140) Stream removed, broadcasting: 1 I0516 13:48:18.303737 6 log.go:172] (0xc002b71080) (0xc001376140) Stream removed, broadcasting: 1 I0516 13:48:18.303748 6 log.go:172] (0xc002b71080) (0xc002408aa0) Stream removed, broadcasting: 3 I0516 13:48:18.303853 6 log.go:172] (0xc002b71080) (0xc002408be0) Stream removed, broadcasting: 5 I0516 13:48:18.304007 6 log.go:172] (0xc002b71080) Go away received May 16 13:48:18.304: INFO: Found all expected endpoints: [netserver-0] May 16 13:48:18.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.195:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1127 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 13:48:18.307: INFO: >>> kubeConfig: /root/.kube/config I0516 13:48:18.338812 6 log.go:172] (0xc001270dc0) (0xc0028e4dc0) Create stream I0516 13:48:18.338855 6 log.go:172] (0xc001270dc0) (0xc0028e4dc0) Stream added, broadcasting: 1 I0516 13:48:18.340736 6 log.go:172] (0xc001270dc0) Reply frame received for 1 I0516 13:48:18.340766 6 log.go:172] (0xc001270dc0) (0xc0029e95e0) Create stream I0516 13:48:18.340782 6 log.go:172] (0xc001270dc0) (0xc0029e95e0) Stream added, broadcasting: 3 I0516 13:48:18.342265 6 log.go:172] (0xc001270dc0) Reply frame received for 3 I0516 13:48:18.342298 6 log.go:172] (0xc001270dc0) (0xc0028e4e60) Create stream I0516 13:48:18.342310 6 log.go:172] (0xc001270dc0) (0xc0028e4e60) Stream added, broadcasting: 5 I0516 13:48:18.343319 6 log.go:172] (0xc001270dc0) Reply frame received for 5 I0516 13:48:18.408906 6 log.go:172] (0xc001270dc0) Data frame received for 5 I0516 13:48:18.408952 6 log.go:172] (0xc0028e4e60) (5) Data frame handling I0516 13:48:18.408987 6 log.go:172] 
(0xc001270dc0) Data frame received for 3 I0516 13:48:18.409004 6 log.go:172] (0xc0029e95e0) (3) Data frame handling I0516 13:48:18.409018 6 log.go:172] (0xc0029e95e0) (3) Data frame sent I0516 13:48:18.409034 6 log.go:172] (0xc001270dc0) Data frame received for 3 I0516 13:48:18.409049 6 log.go:172] (0xc0029e95e0) (3) Data frame handling I0516 13:48:18.410755 6 log.go:172] (0xc001270dc0) Data frame received for 1 I0516 13:48:18.410778 6 log.go:172] (0xc0028e4dc0) (1) Data frame handling I0516 13:48:18.410827 6 log.go:172] (0xc0028e4dc0) (1) Data frame sent I0516 13:48:18.410855 6 log.go:172] (0xc001270dc0) (0xc0028e4dc0) Stream removed, broadcasting: 1 I0516 13:48:18.410920 6 log.go:172] (0xc001270dc0) Go away received I0516 13:48:18.410972 6 log.go:172] (0xc001270dc0) (0xc0028e4dc0) Stream removed, broadcasting: 1 I0516 13:48:18.411005 6 log.go:172] (0xc001270dc0) (0xc0029e95e0) Stream removed, broadcasting: 3 I0516 13:48:18.411021 6 log.go:172] (0xc001270dc0) (0xc0028e4e60) Stream removed, broadcasting: 5 May 16 13:48:18.411: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:48:18.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1127" for this suite. 
May 16 13:48:40.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:48:40.502: INFO: namespace pod-network-test-1127 deletion completed in 22.087194944s • [SLOW TEST:44.499 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:48:40.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:48:40.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0" in namespace "downward-api-271" to be "success or failure" May 16 13:48:40.612: INFO: Pod 
"downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.780535ms May 16 13:48:42.616: INFO: Pod "downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013442568s May 16 13:48:44.620: INFO: Pod "downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017697097s STEP: Saw pod success May 16 13:48:44.620: INFO: Pod "downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0" satisfied condition "success or failure" May 16 13:48:44.623: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0 container client-container: STEP: delete the pod May 16 13:48:44.664: INFO: Waiting for pod downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0 to disappear May 16 13:48:44.667: INFO: Pod downwardapi-volume-eb7c168b-5789-4c44-a44b-66bd92a5c6c0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:48:44.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-271" for this suite. 
May 16 13:48:50.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:48:50.817: INFO: namespace downward-api-271 deletion completed in 6.146569049s • [SLOW TEST:10.316 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:48:50.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 16 13:48:50.866: INFO: Waiting up to 5m0s for pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508" in namespace "emptydir-2443" to be "success or failure" May 16 13:48:50.882: INFO: Pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046243ms May 16 13:48:52.930: INFO: Pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064522298s May 16 13:48:54.933: INFO: Pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508": Phase="Running", Reason="", readiness=true. Elapsed: 4.067180786s May 16 13:48:56.937: INFO: Pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071728338s STEP: Saw pod success May 16 13:48:56.938: INFO: Pod "pod-5cddc57a-20bb-4138-b6f1-16350e0db508" satisfied condition "success or failure" May 16 13:48:56.941: INFO: Trying to get logs from node iruya-worker2 pod pod-5cddc57a-20bb-4138-b6f1-16350e0db508 container test-container: STEP: delete the pod May 16 13:48:57.004: INFO: Waiting for pod pod-5cddc57a-20bb-4138-b6f1-16350e0db508 to disappear May 16 13:48:57.014: INFO: Pod pod-5cddc57a-20bb-4138-b6f1-16350e0db508 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:48:57.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2443" for this suite. 
May 16 13:49:03.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:49:03.144: INFO: namespace emptydir-2443 deletion completed in 6.126948602s • [SLOW TEST:12.327 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:49:03.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:49:03.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240" in namespace "projected-4062" to be "success or failure" May 16 13:49:03.254: INFO: Pod "downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.874517ms May 16 13:49:05.258: INFO: Pod "downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0202453s May 16 13:49:07.262: INFO: Pod "downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024652355s STEP: Saw pod success May 16 13:49:07.262: INFO: Pod "downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240" satisfied condition "success or failure" May 16 13:49:07.266: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240 container client-container: STEP: delete the pod May 16 13:49:07.316: INFO: Waiting for pod downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240 to disappear May 16 13:49:07.410: INFO: Pod downwardapi-volume-972f0cbe-20ff-4e77-a408-2af193634240 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:49:07.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4062" for this suite. 
May 16 13:49:13.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:49:13.534: INFO: namespace projected-4062 deletion completed in 6.120575825s • [SLOW TEST:10.390 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:49:13.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:49:13.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741" in namespace "downward-api-577" to be "success or failure" May 16 13:49:13.607: INFO: Pod "downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.031093ms May 16 13:49:15.631: INFO: Pod "downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028308369s May 16 13:49:17.636: INFO: Pod "downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033005389s STEP: Saw pod success May 16 13:49:17.636: INFO: Pod "downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741" satisfied condition "success or failure" May 16 13:49:17.639: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741 container client-container: STEP: delete the pod May 16 13:49:17.656: INFO: Waiting for pod downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741 to disappear May 16 13:49:17.673: INFO: Pod downwardapi-volume-13e30973-dfbe-4d2b-8719-6b0d450be741 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:49:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-577" for this suite. 
May 16 13:49:23.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:49:23.822: INFO: namespace downward-api-577 deletion completed in 6.146532733s • [SLOW TEST:10.286 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:49:23.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:49:23.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7" in namespace "downward-api-799" to be "success or failure" May 16 13:49:23.949: INFO: Pod "downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.789426ms May 16 13:49:25.953: INFO: Pod "downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025535208s May 16 13:49:27.963: INFO: Pod "downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035742775s STEP: Saw pod success May 16 13:49:27.963: INFO: Pod "downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7" satisfied condition "success or failure" May 16 13:49:27.965: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7 container client-container: STEP: delete the pod May 16 13:49:27.986: INFO: Waiting for pod downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7 to disappear May 16 13:49:28.021: INFO: Pod downwardapi-volume-0ca74953-b8ae-4680-9f39-cf7f95f507b7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:49:28.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-799" for this suite. 
May 16 13:49:34.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:49:34.128: INFO: namespace downward-api-799 deletion completed in 6.10171896s • [SLOW TEST:10.306 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:49:34.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 13:49:34.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8201' May 16 13:49:34.346: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 13:49:34.347: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 16 13:49:38.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8201' May 16 13:49:38.491: INFO: stderr: "" May 16 13:49:38.491: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:49:38.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8201" for this suite. 
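The deprecation warning above (`kubectl run --generator=deployment/apps.v1`) means this imperative form disappears in later releases. A declarative equivalent of what the test created is sketched below; the image and deployment name are taken from the log, while the label key and replica count are assumptions (the generator used `run:` labels and one replica by default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # assumed label key, matching old generator behavior
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

Applied with `kubectl apply -f`, this survives the generator removal and can be deleted the same way the test does (`kubectl delete deployment e2e-test-nginx-deployment`).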
May 16 13:50:06.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:50:06.602: INFO: namespace kubectl-8201 deletion completed in 28.107621978s • [SLOW TEST:32.474 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:50:06.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-e4555fee-43d7-4808-9659-a0d6f1864b44 STEP: Creating a pod to test consume secrets May 16 13:50:06.742: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b" in namespace "projected-3165" to be "success or failure" May 16 13:50:06.768: INFO: Pod "pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b": Phase="Pending", Reason="", 
readiness=false. Elapsed: 25.190112ms May 16 13:50:08.771: INFO: Pod "pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028994347s May 16 13:50:10.776: INFO: Pod "pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033675189s STEP: Saw pod success May 16 13:50:10.776: INFO: Pod "pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b" satisfied condition "success or failure" May 16 13:50:10.779: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b container projected-secret-volume-test: STEP: delete the pod May 16 13:50:10.802: INFO: Waiting for pod pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b to disappear May 16 13:50:10.805: INFO: Pod pod-projected-secrets-2458a03b-c6d3-4452-a0c5-b8101cf1615b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:50:10.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3165" for this suite. 
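The "projected secret with mappings" pod above can be approximated with a manifest like the following; the secret name is shortened and the key/path mapping (`data-1` → `new-path-data-1`) is illustrative, not read from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # hypothetical secret name
          items:
          - key: data-1                     # assumed key
            path: new-path-data-1           # mapped path inside the mount
```

The `items` list is what "with mappings" refers to: each secret key is remapped to a chosen file path under the mount point instead of its literal key name.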
May 16 13:50:16.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:50:16.914: INFO: namespace projected-3165 deletion completed in 6.104593826s • [SLOW TEST:10.311 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:50:16.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-868984d8-d149-4f60-9f03-1a9de3574221 in namespace container-probe-5203 May 16 13:50:21.033: INFO: Started pod liveness-868984d8-d149-4f60-9f03-1a9de3574221 in namespace container-probe-5203 STEP: checking the pod's current state and verifying that restartCount is present May 16 13:50:21.036: INFO: Initial restart count of pod liveness-868984d8-d149-4f60-9f03-1a9de3574221 is 0 May 16 13:50:41.136: INFO: Restart count of pod 
container-probe-5203/liveness-868984d8-d149-4f60-9f03-1a9de3574221 is now 1 (20.099787091s elapsed) May 16 13:51:01.180: INFO: Restart count of pod container-probe-5203/liveness-868984d8-d149-4f60-9f03-1a9de3574221 is now 2 (40.143865114s elapsed) May 16 13:51:21.249: INFO: Restart count of pod container-probe-5203/liveness-868984d8-d149-4f60-9f03-1a9de3574221 is now 3 (1m0.213443061s elapsed) May 16 13:51:41.327: INFO: Restart count of pod container-probe-5203/liveness-868984d8-d149-4f60-9f03-1a9de3574221 is now 4 (1m20.291321573s elapsed) May 16 13:52:41.643: INFO: Restart count of pod container-probe-5203/liveness-868984d8-d149-4f60-9f03-1a9de3574221 is now 5 (2m20.60706201s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:52:41.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5203" for this suite. 
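The monotonically increasing restart count above (1 at ~20s, 2 at ~40s, then widening gaps as the kubelet applies back-off) is produced by a container that deliberately fails its liveness probe. A minimal sketch that reproduces the behavior; the probe timings and image are illustrative, not the test's exact values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # hypothetical name
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    # Never create the file the probe looks for, so every probe fails.
    args: ["/bin/sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # always fails: file never exists
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

Each probe failure kills the container and the kubelet restarts it with exponential back-off, which is why `restartCount` only ever increases and the intervals in the log stretch from ~20s to ~60s between later restarts.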
May 16 13:52:47.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:52:47.810: INFO: namespace container-probe-5203 deletion completed in 6.103142592s • [SLOW TEST:150.895 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:52:47.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-4285 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4285 to expose endpoints map[] May 16 13:52:47.922: INFO: Get endpoints failed (34.137631ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 16 13:52:48.926: INFO: successfully validated that service endpoint-test2 in namespace services-4285 exposes endpoints map[] (1.037923451s elapsed) STEP: Creating pod pod1 in namespace services-4285 STEP: waiting up to 3m0s for service endpoint-test2 in 
namespace services-4285 to expose endpoints map[pod1:[80]] May 16 13:52:52.974: INFO: successfully validated that service endpoint-test2 in namespace services-4285 exposes endpoints map[pod1:[80]] (4.041148564s elapsed) STEP: Creating pod pod2 in namespace services-4285 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4285 to expose endpoints map[pod1:[80] pod2:[80]] May 16 13:52:57.097: INFO: successfully validated that service endpoint-test2 in namespace services-4285 exposes endpoints map[pod1:[80] pod2:[80]] (4.119570252s elapsed) STEP: Deleting pod pod1 in namespace services-4285 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4285 to expose endpoints map[pod2:[80]] May 16 13:52:58.175: INFO: successfully validated that service endpoint-test2 in namespace services-4285 exposes endpoints map[pod2:[80]] (1.073373613s elapsed) STEP: Deleting pod pod2 in namespace services-4285 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4285 to expose endpoints map[] May 16 13:52:59.210: INFO: successfully validated that service endpoint-test2 in namespace services-4285 exposes endpoints map[] (1.028002242s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:52:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4285" for this suite. 
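The endpoint churn above (map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → back to map[]) is driven purely by label selection: endpoints appear when a pod matching the Service selector becomes ready and vanish when it is deleted. A sketch of the shape involved; the label key and pod image are assumptions, since the log only shows names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2   # assumed label; pod1/pod2 must carry it
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2   # matching the selector makes this pod an endpoint
spec:
  containers:
  - name: serve
    image: docker.io/library/nginx:1.14-alpine   # hypothetical image choice
    ports:
    - containerPort: 80
```

Deleting `pod1` removes its address from the Endpoints object without touching the Service, which is exactly the transition the test validates.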
May 16 13:53:05.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:53:05.532: INFO: namespace services-4285 deletion completed in 6.129655618s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:17.722 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:53:05.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 16 13:53:05.632: INFO: Waiting up to 5m0s for pod "downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e" in namespace "downward-api-5877" to be "success or failure" May 16 13:53:05.636: INFO: Pod "downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550412ms May 16 13:53:07.679: INFO: Pod "downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.046317911s May 16 13:53:09.697: INFO: Pod "downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064560469s STEP: Saw pod success May 16 13:53:09.697: INFO: Pod "downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e" satisfied condition "success or failure" May 16 13:53:09.700: INFO: Trying to get logs from node iruya-worker pod downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e container dapi-container: STEP: delete the pod May 16 13:53:09.826: INFO: Waiting for pod downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e to disappear May 16 13:53:09.880: INFO: Pod downward-api-3db7b934-c0ba-4688-98af-a62fe6d97f9e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:53:09.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5877" for this suite. May 16 13:53:15.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:53:15.976: INFO: namespace downward-api-5877 deletion completed in 6.091493833s • [SLOW TEST:10.443 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 
13:53:15.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0516 13:53:26.080010 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 13:53:26.080: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:53:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3253" for this suite. 
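The garbage-collector test above relies on ownership metadata rather than any explicit cleanup: pods created by a ReplicationController carry an `ownerReferences` entry pointing at the RC, so deleting the RC with the default (background) propagation lets the garbage collector remove the pods, which is what "wait for all pods to be garbage collected" observes. A minimal RC sketch; all names and the image are hypothetical:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc          # hypothetical name
spec:
  replicas: 2
  selector:
    name: simpletest           # pods matching this are adopted/owned
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Deleting this RC without orphaning (i.e. not passing `--cascade=orphan`) cascades to its pods via their owner references.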
May 16 13:53:32.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:53:32.189: INFO: namespace gc-3253 deletion completed in 6.105714292s • [SLOW TEST:16.213 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:53:32.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 16 13:53:32.255: INFO: Waiting up to 5m0s for pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4" in namespace "var-expansion-6296" to be "success or failure" May 16 13:53:32.258: INFO: Pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310045ms May 16 13:53:34.329: INFO: Pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074328056s May 16 13:53:36.334: INFO: Pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4": Phase="Running", Reason="", readiness=true. Elapsed: 4.078867703s May 16 13:53:38.339: INFO: Pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083673935s STEP: Saw pod success May 16 13:53:38.339: INFO: Pod "var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4" satisfied condition "success or failure" May 16 13:53:38.342: INFO: Trying to get logs from node iruya-worker pod var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4 container dapi-container: STEP: delete the pod May 16 13:53:38.380: INFO: Waiting for pod var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4 to disappear May 16 13:53:38.396: INFO: Pod var-expansion-3b337e90-cad2-4b40-af90-730c25a952c4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:53:38.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6296" for this suite. 
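"Composing env vars into new env vars" refers to the kubelet's `$(VAR)` expansion in `env` values. A minimal sketch, with hypothetical variable names (the log does not show the actual ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # $(FOO) is expanded by the kubelet, not the shell
```

Expansion only works for variables defined earlier in the same `env` list; an undefined `$(BAR)` is left as the literal string.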
May 16 13:53:44.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:53:44.513: INFO: namespace var-expansion-6296 deletion completed in 6.114250359s • [SLOW TEST:12.324 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:53:44.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 16 13:53:44.561: INFO: namespace kubectl-1150 May 16 13:53:44.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1150' May 16 13:53:47.715: INFO: stderr: "" May 16 13:53:47.715: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 16 13:53:48.719: INFO: Selector matched 1 pods for map[app:redis] May 16 13:53:48.719: INFO: Found 0 / 1 May 16 13:53:49.719: INFO: Selector matched 1 pods for map[app:redis] May 16 13:53:49.719: INFO: Found 0 / 1 May 16 13:53:50.722: INFO: Selector matched 1 pods for map[app:redis] May 16 13:53:50.722: INFO: Found 1 / 1 May 16 13:53:50.722: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 16 13:53:50.752: INFO: Selector matched 1 pods for map[app:redis] May 16 13:53:50.752: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 16 13:53:50.752: INFO: wait on redis-master startup in kubectl-1150 May 16 13:53:50.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2zg85 redis-master --namespace=kubectl-1150' May 16 13:53:50.886: INFO: stderr: "" May 16 13:53:50.886: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 May 13:53:50.444 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 May 13:53:50.444 # Server started, Redis version 3.2.12\n1:M 16 May 13:53:50.444 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 16 May 13:53:50.444 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 16 13:53:50.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1150' May 16 13:53:51.061: INFO: stderr: "" May 16 13:53:51.061: INFO: stdout: "service/rm2 exposed\n" May 16 13:53:51.130: INFO: Service rm2 in namespace kubectl-1150 found. STEP: exposing service May 16 13:53:53.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1150' May 16 13:53:53.272: INFO: stderr: "" May 16 13:53:53.272: INFO: stdout: "service/rm3 exposed\n" May 16 13:53:53.288: INFO: Service rm3 in namespace kubectl-1150 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:53:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1150" for this suite. 
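The two `kubectl expose` invocations above each synthesize a Service. The first (`expose rc redis-master --name=rm2 --port=1234 --target-port=6379`) is roughly equivalent to the manifest below; the selector comes from the `app:redis` label the test matched pods with, and the rest mirrors the flags in the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # label observed in the log's pod selector
  ports:
  - port: 1234
    targetPort: 6379
```

The second invocation ("exposing service") creates `rm3` on port 2345 from `rm2`, reusing `rm2`'s selector and target port, so both Services front the same Redis pod on 6379.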
May 16 13:54:17.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:54:17.400: INFO: namespace kubectl-1150 deletion completed in 22.102728899s • [SLOW TEST:32.886 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:54:17.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 13:54:17.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f" in namespace "projected-5742" to be "success or failure" May 16 13:54:17.487: INFO: Pod "downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.910612ms May 16 13:54:19.491: INFO: Pod "downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01056111s May 16 13:54:21.496: INFO: Pod "downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015436921s STEP: Saw pod success May 16 13:54:21.496: INFO: Pod "downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f" satisfied condition "success or failure" May 16 13:54:21.499: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f container client-container: STEP: delete the pod May 16 13:54:21.520: INFO: Waiting for pod downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f to disappear May 16 13:54:21.525: INFO: Pod downwardapi-volume-81ead4b5-cd60-4cd2-a9a3-346f17b78d5f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:54:21.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5742" for this suite. 
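The "container's memory limit" test above uses a downward API source inside a projected (or plain downwardAPI) volume, exposing `limits.memory` as a file the container reads back. A minimal sketch; file path, limit value, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # value the volume file will contain (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

Note that `resourceFieldRef` requires naming the container explicitly, since different containers in the pod may carry different limits.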
May 16 13:54:27.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:54:27.605: INFO: namespace projected-5742 deletion completed in 6.07670984s • [SLOW TEST:10.204 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:54:27.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-4lc8 STEP: Creating a pod to test atomic-volume-subpath May 16 13:54:27.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4lc8" in namespace "subpath-9362" to be "success or failure" May 16 13:54:27.753: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.176547ms May 16 13:54:29.757: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011586673s May 16 13:54:31.763: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.016812168s May 16 13:54:33.767: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 6.021310562s May 16 13:54:35.772: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 8.025778945s May 16 13:54:37.776: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 10.029774501s May 16 13:54:39.779: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 12.033571642s May 16 13:54:41.784: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 14.038131715s May 16 13:54:43.788: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 16.042369094s May 16 13:54:45.792: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 18.046381213s May 16 13:54:47.799: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 20.052744245s May 16 13:54:49.802: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Running", Reason="", readiness=true. Elapsed: 22.056524173s May 16 13:54:51.806: INFO: Pod "pod-subpath-test-secret-4lc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.060383886s STEP: Saw pod success May 16 13:54:51.806: INFO: Pod "pod-subpath-test-secret-4lc8" satisfied condition "success or failure" May 16 13:54:51.809: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-4lc8 container test-container-subpath-secret-4lc8: STEP: delete the pod May 16 13:54:51.879: INFO: Waiting for pod pod-subpath-test-secret-4lc8 to disappear May 16 13:54:51.973: INFO: Pod pod-subpath-test-secret-4lc8 no longer exists STEP: Deleting pod pod-subpath-test-secret-4lc8 May 16 13:54:51.973: INFO: Deleting pod "pod-subpath-test-secret-4lc8" in namespace "subpath-9362" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:54:51.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9362" for this suite. May 16 13:54:58.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:54:58.081: INFO: namespace subpath-9362 deletion completed in 6.084835155s • [SLOW TEST:30.476 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 
13:54:58.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
May 16 13:54:58.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5562 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 16 13:55:01.970: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0516 13:55:01.895007 2492 log.go:172] (0xc0006f4420) (0xc0001aa140) Create stream\nI0516 13:55:01.895071 2492 log.go:172] (0xc0006f4420) (0xc0001aa140) Stream added, broadcasting: 1\nI0516 13:55:01.897366 2492 log.go:172] (0xc0006f4420) Reply frame received for 1\nI0516 13:55:01.897398 2492 log.go:172] (0xc0006f4420) (0xc0006ee5a0) Create stream\nI0516 13:55:01.897407 2492 log.go:172] (0xc0006f4420) (0xc0006ee5a0) Stream added, broadcasting: 3\nI0516 13:55:01.898403 2492 log.go:172] (0xc0006f4420) Reply frame received for 3\nI0516 13:55:01.898471 2492 log.go:172] (0xc0006f4420) (0xc0001aa1e0) Create stream\nI0516 13:55:01.898493 2492 log.go:172] (0xc0006f4420) (0xc0001aa1e0) Stream added, broadcasting: 5\nI0516 13:55:01.899374 2492 log.go:172] (0xc0006f4420) Reply frame received for 5\nI0516 13:55:01.899415 2492 log.go:172] (0xc0006f4420) (0xc0001aa280) Create stream\nI0516 13:55:01.899427 2492 log.go:172] (0xc0006f4420) (0xc0001aa280) Stream added, broadcasting: 7\nI0516 13:55:01.900361 2492 log.go:172] (0xc0006f4420) Reply frame received for 7\nI0516 13:55:01.900544 2492 log.go:172] (0xc0006ee5a0) (3) Writing data frame\nI0516 13:55:01.900677 2492 log.go:172] (0xc0006ee5a0) (3) Writing data frame\nI0516 13:55:01.901871 2492 log.go:172] (0xc0006f4420) Data frame received for 5\nI0516 13:55:01.901899 2492 log.go:172] (0xc0001aa1e0) (5) Data frame handling\nI0516 13:55:01.901927 2492 log.go:172] (0xc0001aa1e0) (5) Data frame sent\nI0516 13:55:01.902406 2492 log.go:172] (0xc0006f4420) Data frame received for 5\nI0516 13:55:01.902427 2492 log.go:172] (0xc0001aa1e0) (5) Data frame handling\nI0516 13:55:01.902453 2492 log.go:172] (0xc0001aa1e0) (5) Data frame sent\nI0516 13:55:01.946876 2492 log.go:172] (0xc0006f4420) Data frame received for 7\nI0516 13:55:01.946921 2492 log.go:172] (0xc0001aa280) (7) Data frame handling\nI0516 13:55:01.946948 2492 log.go:172] (0xc0006f4420) Data frame received for 5\nI0516 13:55:01.946965 2492 log.go:172] (0xc0001aa1e0) (5) Data frame handling\nI0516 13:55:01.947284 2492 log.go:172] (0xc0006f4420) Data frame received for 1\nI0516 13:55:01.947313 2492 log.go:172] (0xc0006f4420) (0xc0006ee5a0) Stream removed, broadcasting: 3\nI0516 13:55:01.947339 2492 log.go:172] (0xc0001aa140) (1) Data frame handling\nI0516 13:55:01.947355 2492 log.go:172] (0xc0001aa140) (1) Data frame sent\nI0516 13:55:01.947364 2492 log.go:172] (0xc0006f4420) (0xc0001aa140) Stream removed, broadcasting: 1\nI0516 13:55:01.947376 2492 log.go:172] (0xc0006f4420) Go away received\nI0516 13:55:01.947548 2492 log.go:172] (0xc0006f4420) (0xc0001aa140) Stream removed, broadcasting: 1\nI0516 13:55:01.947601 2492 log.go:172] (0xc0006f4420) (0xc0006ee5a0) Stream removed, broadcasting: 3\nI0516 13:55:01.947618 2492 log.go:172] (0xc0006f4420) (0xc0001aa1e0) Stream removed, broadcasting: 5\nI0516 13:55:01.947634 2492 log.go:172] (0xc0006f4420) (0xc0001aa280) Stream removed, broadcasting: 7\n"
May 16 13:55:01.970: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:55:03.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5562" for this suite.
May 16 13:55:14.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:55:14.114: INFO: namespace kubectl-5562 deletion completed in 10.135569338s
• [SLOW TEST:16.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:55:14.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance]
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:55:18.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8604" for this suite.
May 16 13:56:04.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:56:04.396: INFO: namespace kubelet-test-8604 deletion completed in 46.150640145s
• [SLOW TEST:50.282 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:56:04.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-19d118bc-856c-4295-8efe-41dc990c0ba8
STEP: Creating a pod to test consume configMaps
May 16 13:56:04.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118" in namespace "configmap-3039" to be "success or failure"
May 16 13:56:04.481: INFO: Pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118": Phase="Pending", Reason="", readiness=false. Elapsed: 24.74498ms
May 16 13:56:06.486: INFO: Pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029089433s
May 16 13:56:08.489: INFO: Pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118": Phase="Running", Reason="", readiness=true. Elapsed: 4.032908562s
May 16 13:56:10.494: INFO: Pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037106809s
STEP: Saw pod success
May 16 13:56:10.494: INFO: Pod "pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118" satisfied condition "success or failure"
May 16 13:56:10.497: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118 container configmap-volume-test:
STEP: delete the pod
May 16 13:56:10.537: INFO: Waiting for pod pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118 to disappear
May 16 13:56:10.552: INFO: Pod pod-configmaps-3b2e1697-4407-47ae-8995-ad446e529118 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:56:10.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3039" for this suite.
May 16 13:56:16.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:56:16.657: INFO: namespace configmap-3039 deletion completed in 6.102012765s
• [SLOW TEST:12.261 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:56:16.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 16 13:56:21.800: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:56:22.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2709" for this
suite.
May 16 13:56:44.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:56:44.903: INFO: namespace replicaset-2709 deletion completed in 22.079457683s
• [SLOW TEST:28.246 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:56:44.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3676/configmap-test-839215b7-8b33-4340-99d4-49e7f970d56a
STEP: Creating a pod to test consume configMaps
May 16 13:56:44.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7" in namespace "configmap-3676" to be "success or failure"
May 16 13:56:44.980: INFO: Pod "pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936693ms
May 16 13:56:46.984: INFO: Pod "pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007662458s
May 16 13:56:48.988: INFO: Pod "pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01190263s
STEP: Saw pod success
May 16 13:56:48.988: INFO: Pod "pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7" satisfied condition "success or failure"
May 16 13:56:48.991: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7 container env-test:
STEP: delete the pod
May 16 13:56:49.100: INFO: Waiting for pod pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7 to disappear
May 16 13:56:49.142: INFO: Pod pod-configmaps-41f43389-6e15-4dca-9cc2-1ab24e1205a7 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:56:49.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3676" for this suite.
May 16 13:56:55.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:56:55.337: INFO: namespace configmap-3676 deletion completed in 6.191652462s
• [SLOW TEST:10.434 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:56:55.339: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9392
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9392 to expose endpoints map[]
May 16 13:56:55.487: INFO: Get endpoints failed (15.889952ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 16 13:56:56.491: INFO: successfully validated that service multi-endpoint-test in namespace services-9392 exposes endpoints map[] (1.020259225s elapsed)
STEP: Creating pod pod1 in namespace services-9392
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9392 to expose endpoints map[pod1:[100]]
May 16 13:57:00.567: INFO: successfully validated that service multi-endpoint-test in namespace services-9392 exposes endpoints map[pod1:[100]] (4.068598679s elapsed)
STEP: Creating pod pod2 in namespace services-9392
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9392 to expose endpoints map[pod1:[100] pod2:[101]]
May 16 13:57:05.565: INFO: successfully validated that service multi-endpoint-test in namespace services-9392 exposes endpoints map[pod1:[100] pod2:[101]] (4.993173542s elapsed)
STEP: Deleting pod pod1 in namespace services-9392
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9392 to expose endpoints map[pod2:[101]]
May 16 13:57:06.609: INFO: successfully validated that service multi-endpoint-test in namespace services-9392 exposes endpoints map[pod2:[101]] (1.038795222s elapsed)
STEP: Deleting pod pod2 in namespace services-9392
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9392 to expose endpoints map[]
May 16 13:57:07.746: INFO: successfully validated that service multi-endpoint-test in namespace services-9392 exposes endpoints map[] (1.131594629s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:57:07.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9392" for this suite.
May 16 13:57:13.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:57:13.998: INFO: namespace services-9392 deletion completed in 6.091344836s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:18.660 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:57:13.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3154
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 16 13:57:14.132: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 16 13:57:40.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.208:8080/dial?request=hostName&protocol=udp&host=10.244.2.28&port=8081&tries=1'] Namespace:pod-network-test-3154 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 16 13:57:40.751: INFO: >>> kubeConfig: /root/.kube/config
I0516 13:57:40.777727 6 log.go:172] (0xc000b97b80) (0xc000a6cb40) Create stream
I0516 13:57:40.777767 6 log.go:172] (0xc000b97b80) (0xc000a6cb40) Stream added, broadcasting: 1
I0516 13:57:40.779630 6 log.go:172] (0xc000b97b80) Reply frame received for 1
I0516 13:57:40.779670 6 log.go:172] (0xc000b97b80) (0xc0024080a0) Create stream
I0516 13:57:40.779684 6 log.go:172] (0xc000b97b80) (0xc0024080a0) Stream added, broadcasting: 3
I0516 13:57:40.780608 6 log.go:172] (0xc000b97b80) Reply frame received for 3
I0516 13:57:40.780637 6 log.go:172] (0xc000b97b80) (0xc002408140) Create stream
I0516 13:57:40.780647 6 log.go:172] (0xc000b97b80) (0xc002408140) Stream added, broadcasting: 5
I0516 13:57:40.781679 6 log.go:172] (0xc000b97b80) Reply frame received for 5
I0516 13:57:40.858031 6 log.go:172] (0xc000b97b80) Data frame received for 3
I0516 13:57:40.858068 6 log.go:172] (0xc0024080a0) (3) Data frame handling
I0516 13:57:40.858090 6 log.go:172] (0xc0024080a0) (3) Data frame sent
I0516 13:57:40.858399 6 log.go:172] (0xc000b97b80) Data frame received for 3
I0516 13:57:40.858416 6 log.go:172] (0xc0024080a0) (3) Data frame handling
I0516 13:57:40.858440 6 log.go:172] (0xc000b97b80) Data frame received for 5
I0516 13:57:40.858447 6 log.go:172] (0xc002408140) (5) Data frame handling
I0516 13:57:40.860169 6 log.go:172] (0xc000b97b80) Data frame received for 1
I0516 13:57:40.860241 6 log.go:172] (0xc000a6cb40) (1) Data frame handling
I0516 13:57:40.860274 6 log.go:172] (0xc000a6cb40) (1) Data frame sent
I0516 13:57:40.860301 6 log.go:172] (0xc000b97b80) (0xc000a6cb40) Stream removed, broadcasting: 1
I0516 13:57:40.860327 6 log.go:172] (0xc000b97b80) Go away received
I0516 13:57:40.860412 6 log.go:172] (0xc000b97b80) (0xc000a6cb40) Stream removed, broadcasting: 1
I0516 13:57:40.860432 6 log.go:172] (0xc000b97b80) (0xc0024080a0) Stream removed, broadcasting: 3
I0516 13:57:40.860440 6 log.go:172] (0xc000b97b80) (0xc002408140) Stream removed, broadcasting: 5
May 16 13:57:40.860: INFO: Waiting for endpoints: map[]
May 16 13:57:40.864: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.208:8080/dial?request=hostName&protocol=udp&host=10.244.1.207&port=8081&tries=1'] Namespace:pod-network-test-3154 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 16 13:57:40.864: INFO: >>> kubeConfig: /root/.kube/config
I0516 13:57:40.893408 6 log.go:172] (0xc000b2c790) (0xc0010305a0) Create stream
I0516 13:57:40.893446 6 log.go:172] (0xc000b2c790) (0xc0010305a0) Stream added, broadcasting: 1
I0516 13:57:40.895770 6 log.go:172] (0xc000b2c790) Reply frame received for 1
I0516 13:57:40.895798 6 log.go:172] (0xc000b2c790) (0xc001030820) Create stream
I0516 13:57:40.895808 6 log.go:172] (0xc000b2c790) (0xc001030820) Stream added, broadcasting: 3
I0516 13:57:40.896699 6 log.go:172] (0xc000b2c790) Reply frame received for 3
I0516 13:57:40.896736 6 log.go:172] (0xc000b2c790) (0xc002408280) Create stream
I0516 13:57:40.896752 6 log.go:172] (0xc000b2c790) (0xc002408280) Stream added, broadcasting: 5
I0516 13:57:40.897832 6 log.go:172] (0xc000b2c790) Reply frame received for 5
I0516 13:57:40.968453 6 log.go:172] (0xc000b2c790) Data frame received for 3
I0516 13:57:40.968482 6 log.go:172] (0xc001030820) (3) Data frame handling
I0516 13:57:40.968506 6 log.go:172] (0xc001030820) (3) Data frame sent
I0516 13:57:40.968962 6 log.go:172] (0xc000b2c790) Data frame received for 3
I0516 13:57:40.969002 6 log.go:172] (0xc001030820) (3) Data frame handling
I0516 13:57:40.969035 6 log.go:172] (0xc000b2c790) Data frame received for 5
I0516 13:57:40.969066 6 log.go:172] (0xc002408280) (5) Data frame handling
I0516 13:57:40.970701 6 log.go:172] (0xc000b2c790) Data frame received for 1
I0516 13:57:40.970751 6 log.go:172] (0xc0010305a0) (1) Data frame handling
I0516 13:57:40.970773 6 log.go:172] (0xc0010305a0) (1) Data frame sent
I0516 13:57:40.970805 6 log.go:172] (0xc000b2c790) (0xc0010305a0) Stream removed, broadcasting: 1
I0516 13:57:40.970822 6 log.go:172] (0xc000b2c790) Go away received
I0516 13:57:40.970980 6 log.go:172] (0xc000b2c790) (0xc0010305a0) Stream removed, broadcasting: 1
I0516 13:57:40.971003 6 log.go:172] (0xc000b2c790) (0xc001030820) Stream removed, broadcasting: 3
I0516 13:57:40.971020 6 log.go:172] (0xc000b2c790) (0xc002408280) Stream removed, broadcasting: 5
May 16 13:57:40.971: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:57:40.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3154" for this suite.
May 16 13:58:05.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:58:05.106: INFO: namespace pod-network-test-3154 deletion completed in 24.130526801s
• [SLOW TEST:51.106 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:58:05.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4674/configmap-test-9d8a3a94-3c49-484a-a3e9-3ac9be9391e9
STEP: Creating a pod to test consume configMaps
May 16 13:58:05.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96" in namespace "configmap-4674" to be "success or failure"
May 16 13:58:05.215: INFO: Pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024188ms
May 16 13:58:08.004: INFO: Pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.796713561s
May 16 13:58:10.009: INFO: Pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96": Phase="Running", Reason="", readiness=true. Elapsed: 4.801660565s
May 16 13:58:12.014: INFO: Pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.80648552s
STEP: Saw pod success
May 16 13:58:12.014: INFO: Pod "pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96" satisfied condition "success or failure"
May 16 13:58:12.018: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96 container env-test:
STEP: delete the pod
May 16 13:58:12.031: INFO: Waiting for pod pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96 to disappear
May 16 13:58:12.041: INFO: Pod pod-configmaps-0e451432-b96a-4c7e-b44d-414194969e96 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:58:12.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4674" for this suite.
May 16 13:58:18.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:58:18.159: INFO: namespace configmap-4674 deletion completed in 6.114653107s
• [SLOW TEST:13.053 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:58:18.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 16 13:58:22.778: INFO: Successfully updated pod "pod-update-546397df-a427-4ec2-bade-30e5ef1dae7a"
STEP: verifying the updated pod is in kubernetes
May 16 13:58:22.841: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 13:58:22.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8629" for this suite.
May 16 13:58:44.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 13:58:44.927: INFO: namespace pods-8629 deletion completed in 22.083818034s
• [SLOW TEST:26.768 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 13:58:44.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 16 13:58:51.029: INFO: DNS probes using dns-test-07a8ddf9-df2c-4a6b-9bd7-0880f05b7f37 succeeded
STEP: deleting
the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 13:58:57.345: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:58:57.348: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:58:57.348: INFO: Lookups using dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] May 16 13:59:02.352: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:02.355: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 16 13:59:02.355: INFO: Lookups using dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] May 16 13:59:07.353: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:07.357: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:07.357: INFO: Lookups using dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] May 16 13:59:12.352: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:12.355: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:12.355: INFO: Lookups using dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] May 16 13:59:17.352: INFO: File wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' May 16 13:59:17.356: INFO: File jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local from pod dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 16 13:59:17.356: INFO: Lookups using dns-6767/dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 failed for: [wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local] May 16 13:59:22.356: INFO: DNS probes using dns-test-36362a3d-6062-4e3b-a6df-a12860d5b200 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6767.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6767.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 13:59:31.195: INFO: DNS probes using dns-test-4a5e2bda-aa1a-485e-b458-14aded0fd91f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:59:31.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6767" for this suite. 
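The DNS test above creates an ExternalName service, later repoints it from foo.example.com to bar.example.com (the retry loop in the log is the probe waiting for the CNAME change to propagate), and finally converts it to type ClusterIP. A minimal sketch of the initial service, using the service name and namespace from the log; the exact manifest the framework generates may differ:

```yaml
# Sketch of the ExternalName service the dig probes resolve via CNAME.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-6767
spec:
  type: ExternalName
  externalName: foo.example.com
```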
May 16 13:59:37.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 13:59:37.439: INFO: namespace dns-6767 deletion completed in 6.09760474s • [SLOW TEST:52.511 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 13:59:37.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 16 13:59:37.469: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 16 13:59:37.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:37.762: INFO: stderr: "" May 16 13:59:37.762: INFO: stdout: "service/redis-slave created\n" May 16 13:59:37.762: INFO: apiVersion: v1 kind: Service metadata: name: 
redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 16 13:59:37.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:38.053: INFO: stderr: "" May 16 13:59:38.053: INFO: stdout: "service/redis-master created\n" May 16 13:59:38.053: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 16 13:59:38.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:38.356: INFO: stderr: "" May 16 13:59:38.356: INFO: stdout: "service/frontend created\n" May 16 13:59:38.356: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 16 13:59:38.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:38.668: INFO: stderr: "" May 16 13:59:38.668: INFO: stdout: "deployment.apps/frontend created\n" May 16 13:59:38.668: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend 
template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 16 13:59:38.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:38.947: INFO: stderr: "" May 16 13:59:38.947: INFO: stdout: "deployment.apps/redis-master created\n" May 16 13:59:38.947: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 16 13:59:38.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1831' May 16 13:59:39.227: INFO: stderr: "" May 16 13:59:39.227: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 16 13:59:39.227: INFO: Waiting for all frontend pods to be Running. May 16 13:59:49.277: INFO: Waiting for frontend to serve content. May 16 13:59:49.292: INFO: Trying to add a new entry to the guestbook. May 16 13:59:49.317: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 16 13:59:49.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:49.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:49.507: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 16 13:59:49.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:49.688: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:49.688: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 16 13:59:49.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:49.803: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:49.803: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 13:59:49.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:49.918: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:49.918: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 16 13:59:49.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:50.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:50.018: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 16 13:59:50.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1831' May 16 13:59:50.121: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 13:59:50.121: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 13:59:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1831" for this suite. 
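The cleanup steps above force-delete each guestbook resource with a zero grace period, which is why kubectl prints the "Immediate deletion does not wait for confirmation" warning each time. A sketch of the per-resource cleanup loop; since no cluster is assumed here, the commands are only assembled and printed:

```shell
# Sketch of the forced cleanup the suite performs for each guestbook
# resource; the resource list and namespace are taken from the log.
NAMESPACE=kubectl-1831
for res in service/redis-slave service/redis-master service/frontend \
           deployment.apps/frontend deployment.apps/redis-master deployment.apps/redis-slave; do
  echo "kubectl delete $res --grace-period=0 --force --namespace=$NAMESPACE"
done
```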
May 16 14:00:28.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:00:28.292: INFO: namespace kubectl-1831 deletion completed in 38.147339834s • [SLOW TEST:50.853 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:00:28.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-573a5183-0e94-45b7-b401-34a044b7b186 STEP: Creating a pod to test consume configMaps May 16 14:00:28.371: INFO: Waiting up to 5m0s for pod "pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494" in namespace "configmap-5716" to be "success or failure" May 16 14:00:28.375: INFO: Pod "pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.465549ms May 16 14:00:30.379: INFO: Pod "pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007883023s May 16 14:00:32.384: INFO: Pod "pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01224405s STEP: Saw pod success May 16 14:00:32.384: INFO: Pod "pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494" satisfied condition "success or failure" May 16 14:00:32.386: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494 container configmap-volume-test: STEP: delete the pod May 16 14:00:32.407: INFO: Waiting for pod pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494 to disappear May 16 14:00:32.411: INFO: Pod pod-configmaps-71835163-7c0d-496e-a7f4-38c8462fd494 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:00:32.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5716" for this suite. 
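The ConfigMap test above mounts one ConfigMap into two separate volumes of the same pod and reads a key back through both mounts. A minimal sketch of that pod, assuming illustrative names and paths rather than the generated ones in the log:

```yaml
# Sketch of the same ConfigMap consumed via two volumes in one pod;
# all names, the image, and the mount paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ['sh', '-c', 'cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1']
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-example
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-example
```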
May 16 14:00:38.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:00:38.509: INFO: namespace configmap-5716 deletion completed in 6.095559381s • [SLOW TEST:10.216 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:00:38.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 14:00:46.631: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 14:00:46.645: INFO: Pod pod-with-prestop-http-hook still exists May 16 14:00:48.645: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 14:00:48.649: INFO: Pod pod-with-prestop-http-hook still exists May 16 14:00:50.645: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 14:00:50.650: INFO: Pod pod-with-prestop-http-hook still exists May 16 14:00:52.645: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 14:00:52.649: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:00:52.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1360" for this suite. 
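The lifecycle-hook test above deletes a pod named pod-with-prestop-http-hook and then checks that its preStop HTTP hook hit the handler pod created in BeforeEach. A minimal sketch of such a pod; the handler host, port, and path are placeholders, since the log does not show the generated spec:

```yaml
# Sketch of a pod with a preStop httpGet hook, as exercised above;
# host, port, and path are placeholders for the handler pod's address.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.0.10   # placeholder: IP of the hook-handler pod
          port: 8080          # placeholder
          path: /echo?msg=prestop
```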
May 16 14:01:14.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:01:14.767: INFO: namespace container-lifecycle-hook-1360 deletion completed in 22.108082641s • [SLOW TEST:36.257 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:01:14.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 16 14:01:14.831: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 14:01:14.845: INFO: Waiting for terminating namespaces to be deleted... 
May 16 14:01:14.847: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 16 14:01:14.851: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 16 14:01:14.851: INFO: Container kube-proxy ready: true, restart count 0 May 16 14:01:14.851: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 16 14:01:14.851: INFO: Container kindnet-cni ready: true, restart count 0 May 16 14:01:14.851: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 16 14:01:14.855: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 16 14:01:14.855: INFO: Container kube-proxy ready: true, restart count 0 May 16 14:01:14.855: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 16 14:01:14.855: INFO: Container kindnet-cni ready: true, restart count 0 May 16 14:01:14.855: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 16 14:01:14.855: INFO: Container coredns ready: true, restart count 0 May 16 14:01:14.855: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 16 14:01:14.855: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 16 14:01:14.941: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 16 14:01:14.941: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 16 14:01:14.941: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 16 14:01:14.941: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 16 14:01:14.941: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 16 14:01:14.941: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b.160f870b0a0b37dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1237/filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b.160f870b5cfcaaaf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b.160f870bbc5c508b], Reason = [Created], Message = [Created container filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b] STEP: Considering event: Type = [Normal], Name = [filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b.160f870bd305a608], Reason = [Started], Message = [Started container filler-pod-12c56563-dcc6-4bd4-acde-943225378c7b] STEP: Considering event: Type = [Normal], Name = [filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4.160f870b0ac82e6c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1237/filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4.160f870ba92e27f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4.160f870be87c9627], Reason = [Created], Message = [Created 
container filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4] STEP: Considering event: Type = [Normal], Name = [filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4.160f870bf8db3d7f], Reason = [Started], Message = [Started container filler-pod-9f0ff765-a8c9-4131-99a4-6f64037b3cd4] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f870c75ada168], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:01:22.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1237" for this suite. 
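The scheduling test above fills each node with pause pods sized to the remaining CPU, then submits one more pod whose request cannot fit anywhere, producing the "0/3 nodes are available … 2 Insufficient cpu" event. A sketch of that final pod; the request value is illustrative, since the actual figure is computed from node capacity at runtime:

```yaml
# Sketch of the "additional pod" that is expected to fail scheduling;
# the CPU request is an illustrative value, not the computed one.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"
```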
May 16 14:01:28.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:01:28.325: INFO: namespace sched-pred-1237 deletion completed in 6.146624802s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.559 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:01:28.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 16 14:01:32.947: INFO: Successfully updated pod "labelsupdated72e953b-076b-461a-afc2-c3f7e66e1c32" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:01:34.967: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "downward-api-6077" for this suite. May 16 14:01:56.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:01:57.062: INFO: namespace downward-api-6077 deletion completed in 22.091872276s • [SLOW TEST:28.737 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:01:57.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 16 14:01:57.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 16 14:01:57.289: INFO: stderr: "" May 16 14:01:57.289: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:01:57.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8879" for this suite. 
May 16 14:02:03.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:02:03.427: INFO: namespace kubectl-8879 deletion completed in 6.107323675s • [SLOW TEST:6.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:02:03.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 16 14:02:03.480: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:02:12.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1528" for this suite. 
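The init-container test above ("PodSpec: initContainers in spec.initContainers") creates a RestartAlways pod whose init containers must all complete before the main container starts. A minimal sketch of such a pod; images and commands are illustrative:

```yaml
# Sketch of a RestartAlways pod with init containers, as invoked by the
# test above; images and commands are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```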
May 16 14:02:34.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:02:34.175: INFO: namespace init-container-1528 deletion completed in 22.088161601s

• [SLOW TEST:30.747 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:02:34.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b0f07b15-83bc-4294-a6ab-1f21e0c25b91
STEP: Creating a pod to test consume secrets
May 16 14:02:34.252: INFO: Waiting up to 5m0s for pod "pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da" in namespace "secrets-5605" to be "success or failure"
May 16 14:02:34.257: INFO: Pod "pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da": Phase="Pending", Reason="", readiness=false. Elapsed: 5.016505ms
May 16 14:02:36.262: INFO: Pod "pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009486941s
May 16 14:02:38.267: INFO: Pod "pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014179391s
STEP: Saw pod success
May 16 14:02:38.267: INFO: Pod "pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da" satisfied condition "success or failure"
May 16 14:02:38.271: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da container secret-volume-test:
STEP: delete the pod
May 16 14:02:38.289: INFO: Waiting for pod pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da to disappear
May 16 14:02:38.299: INFO: Pod pod-secrets-85aa5dda-3928-439c-8c66-330c6dbb03da no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:02:38.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5605" for this suite.
May 16 14:02:44.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:02:44.394: INFO: namespace secrets-5605 deletion completed in 6.091386393s

• [SLOW TEST:10.218 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:02:44.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 14:02:44.479: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 16 14:02:44.485: INFO: Number of nodes with available pods: 0
May 16 14:02:44.485: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 16 14:02:44.537: INFO: Number of nodes with available pods: 0
May 16 14:02:44.537: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:45.541: INFO: Number of nodes with available pods: 0
May 16 14:02:45.541: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:46.541: INFO: Number of nodes with available pods: 0
May 16 14:02:46.541: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:47.541: INFO: Number of nodes with available pods: 1
May 16 14:02:47.541: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 16 14:02:47.572: INFO: Number of nodes with available pods: 1
May 16 14:02:47.572: INFO: Number of running nodes: 0, number of available pods: 1
May 16 14:02:48.577: INFO: Number of nodes with available pods: 0
May 16 14:02:48.578: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 16 14:02:48.587: INFO: Number of nodes with available pods: 0
May 16 14:02:48.587: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:49.620: INFO: Number of nodes with available pods: 0
May 16 14:02:49.620: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:50.593: INFO: Number of nodes with available pods: 0
May 16 14:02:50.593: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:51.592: INFO: Number of nodes with available pods: 0
May 16 14:02:51.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:52.592: INFO: Number of nodes with available pods: 0
May 16 14:02:52.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:53.591: INFO: Number of nodes with available pods: 0
May 16 14:02:53.591: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:54.591: INFO: Number of nodes with available pods: 0
May 16 14:02:54.591: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:55.591: INFO: Number of nodes with available pods: 0
May 16 14:02:55.591: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:56.592: INFO: Number of nodes with available pods: 0
May 16 14:02:56.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:57.607: INFO: Number of nodes with available pods: 0
May 16 14:02:57.607: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:58.592: INFO: Number of nodes with available pods: 0
May 16 14:02:58.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:02:59.591: INFO: Number of nodes with available pods: 0
May 16 14:02:59.591: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:00.592: INFO: Number of nodes with available pods: 0
May 16 14:03:00.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:01.591: INFO: Number of nodes with available pods: 0
May 16 14:03:01.591: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:02.592: INFO: Number of nodes with available pods: 0
May 16 14:03:02.592: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:03.735: INFO: Number of nodes with available pods: 0
May 16 14:03:03.735: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:04.655: INFO: Number of nodes with available pods: 0
May 16 14:03:04.655: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:05.592: INFO: Number of nodes with available pods: 1
May 16 14:03:05.592: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2755, will wait for the garbage collector to delete the pods
May 16 14:03:05.657: INFO: Deleting DaemonSet.extensions daemon-set took: 6.324004ms
May 16 14:03:05.957: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.289067ms
May 16 14:03:12.261: INFO: Number of nodes with available pods: 0
May 16 14:03:12.261: INFO: Number of running nodes: 0, number of available pods: 0
May 16 14:03:12.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2755/daemonsets","resourceVersion":"11224699"},"items":null}
May 16 14:03:12.266: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2755/pods","resourceVersion":"11224699"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:03:12.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2755" for this suite.
May 16 14:03:18.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:03:18.420: INFO: namespace daemonsets-2755 deletion completed in 6.11931833s

• [SLOW TEST:34.026 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:03:18.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5f6c41f6-d46d-473d-a35b-ef68a43f2467
STEP: Creating secret with name s-test-opt-upd-37ad64b2-89a8-47fb-86cf-af4c5390df01
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5f6c41f6-d46d-473d-a35b-ef68a43f2467
STEP: Updating secret s-test-opt-upd-37ad64b2-89a8-47fb-86cf-af4c5390df01
STEP: Creating secret with name s-test-opt-create-1ea02727-5d1c-4071-af68-82b8351bc058
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:03:26.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5170" for this suite.
May 16 14:03:48.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:03:48.689: INFO: namespace secrets-5170 deletion completed in 22.091026469s

• [SLOW TEST:30.269 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:03:48.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 14:03:48.792: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 16 14:03:48.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:48.803: INFO: Number of nodes with available pods: 0
May 16 14:03:48.803: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:49.810: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:49.813: INFO: Number of nodes with available pods: 0
May 16 14:03:49.813: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:50.853: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:50.856: INFO: Number of nodes with available pods: 0
May 16 14:03:50.857: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:52.040: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:52.050: INFO: Number of nodes with available pods: 0
May 16 14:03:52.050: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:52.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:52.811: INFO: Number of nodes with available pods: 0
May 16 14:03:52.811: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:03:53.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:53.812: INFO: Number of nodes with available pods: 2
May 16 14:03:53.812: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 16 14:03:53.866: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:53.866: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:53.876: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:54.880: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:54.880: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:54.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:55.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:55.882: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:55.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:56.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:56.882: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:56.882: INFO: Pod daemon-set-pz8z9 is not available
May 16 14:03:56.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:57.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:57.882: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:57.882: INFO: Pod daemon-set-pz8z9 is not available
May 16 14:03:57.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:58.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:58.881: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:58.881: INFO: Pod daemon-set-pz8z9 is not available
May 16 14:03:58.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:03:59.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:59.882: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:03:59.882: INFO: Pod daemon-set-pz8z9 is not available
May 16 14:03:59.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:00.883: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:00.883: INFO: Wrong image for pod: daemon-set-pz8z9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:00.883: INFO: Pod daemon-set-pz8z9 is not available
May 16 14:04:00.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:01.894: INFO: Pod daemon-set-gsp4w is not available
May 16 14:04:01.895: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:01.932: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:02.881: INFO: Pod daemon-set-gsp4w is not available
May 16 14:04:02.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:02.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:03.881: INFO: Pod daemon-set-gsp4w is not available
May 16 14:04:03.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:03.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:04.883: INFO: Pod daemon-set-gsp4w is not available
May 16 14:04:04.883: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:04.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:05.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:05.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:06.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:06.882: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:06.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:07.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:07.881: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:07.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:08.881: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:08.881: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:08.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:09.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:09.882: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:09.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:10.884: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:10.884: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:10.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:11.882: INFO: Wrong image for pod: daemon-set-p4rxw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 16 14:04:11.882: INFO: Pod daemon-set-p4rxw is not available
May 16 14:04:11.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:12.881: INFO: Pod daemon-set-vg4z6 is not available
May 16 14:04:12.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 16 14:04:12.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:12.892: INFO: Number of nodes with available pods: 1
May 16 14:04:12.892: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:04:13.896: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:13.899: INFO: Number of nodes with available pods: 1
May 16 14:04:13.899: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:04:14.900: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:14.902: INFO: Number of nodes with available pods: 1
May 16 14:04:14.902: INFO: Node iruya-worker is running more than one daemon pod
May 16 14:04:15.898: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 16 14:04:15.902: INFO: Number of nodes with available pods: 2
May 16 14:04:15.902: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-930, will wait for the garbage collector to delete the pods
May 16 14:04:15.974: INFO: Deleting DaemonSet.extensions daemon-set took: 6.727776ms
May 16 14:04:16.277: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.780291ms
May 16 14:04:22.297: INFO: Number of nodes with available pods: 0
May 16 14:04:22.297: INFO: Number of running nodes: 0, number of available pods: 0
May 16 14:04:22.300: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-930/daemonsets","resourceVersion":"11224977"},"items":null}
May 16 14:04:22.303: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-930/pods","resourceVersion":"11224977"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:04:22.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-930" for this suite.
May 16 14:04:28.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:04:28.450: INFO: namespace daemonsets-930 deletion completed in 6.13352187s

• [SLOW TEST:39.761 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:04:28.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
May 16 14:04:28.501: INFO: Waiting up to 5m0s for pod "var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83" in namespace "var-expansion-5473" to be "success or failure"
May 16 14:04:28.511: INFO: Pod "var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742134ms
May 16 14:04:30.603: INFO: Pod "var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101508173s
May 16 14:04:32.608: INFO: Pod "var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106317414s
STEP: Saw pod success
May 16 14:04:32.608: INFO: Pod "var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83" satisfied condition "success or failure"
May 16 14:04:32.610: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83 container dapi-container:
STEP: delete the pod
May 16 14:04:32.643: INFO: Waiting for pod var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83 to disappear
May 16 14:04:32.650: INFO: Pod var-expansion-4485608b-9281-4d60-a5c8-5cedd07b1b83 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:04:32.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5473" for this suite.
May 16 14:04:38.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:04:38.756: INFO: namespace var-expansion-5473 deletion completed in 6.102679672s • [SLOW TEST:10.305 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:04:38.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-5a6e9e06-62b4-4788-942e-f81cd299f6b8 in namespace container-probe-8173 May 16 14:04:42.901: INFO: Started pod busybox-5a6e9e06-62b4-4788-942e-f81cd299f6b8 in namespace container-probe-8173 STEP: checking the pod's current state and verifying that restartCount is present May 16 14:04:42.904: INFO: Initial restart count of pod busybox-5a6e9e06-62b4-4788-942e-f81cd299f6b8 is 0 May 16 14:05:31.003: INFO: 
Restart count of pod container-probe-8173/busybox-5a6e9e06-62b4-4788-942e-f81cd299f6b8 is now 1 (48.098972418s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:05:31.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8173" for this suite. May 16 14:05:37.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:05:37.151: INFO: namespace container-probe-8173 deletion completed in 6.134477296s • [SLOW TEST:58.395 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:05:37.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 16 14:05:37.204: INFO: Waiting up to 
5m0s for pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4" in namespace "downward-api-2471" to be "success or failure" May 16 14:05:37.218: INFO: Pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.355928ms May 16 14:05:39.221: INFO: Pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017434974s May 16 14:05:41.225: INFO: Pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4": Phase="Running", Reason="", readiness=true. Elapsed: 4.021125021s May 16 14:05:43.229: INFO: Pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025198193s STEP: Saw pod success May 16 14:05:43.229: INFO: Pod "downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4" satisfied condition "success or failure" May 16 14:05:43.232: INFO: Trying to get logs from node iruya-worker2 pod downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4 container dapi-container: STEP: delete the pod May 16 14:05:43.255: INFO: Waiting for pod downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4 to disappear May 16 14:05:43.260: INFO: Pod downward-api-6ef7bfe3-962c-4877-83ad-b1bba7a199c4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:05:43.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2471" for this suite. 
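The downward-api test above injects the pod's own resource fields into its environment. A minimal sketch of the kind of manifest it exercises — the container name `dapi-container` comes from the log, but the image, resource values, and env-var names here are illustrative, not recovered from the run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    # Each env var is resolved from the container's own resource fields.
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

The test then reads the container's logs and checks that the printed environment matches the declared requests and limits.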
May 16 14:05:49.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:05:49.366: INFO: namespace downward-api-2471 deletion completed in 6.102383217s • [SLOW TEST:12.214 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:05:49.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cce827ec-c741-41cb-b980-044dfaf2e20a STEP: Creating a pod to test consume secrets May 16 14:05:49.526: INFO: Waiting up to 5m0s for pod "pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e" in namespace "secrets-3830" to be "success or failure" May 16 14:05:49.574: INFO: Pod "pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.343354ms May 16 14:05:51.579: INFO: Pod "pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053051074s May 16 14:05:53.583: INFO: Pod "pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057477066s STEP: Saw pod success May 16 14:05:53.583: INFO: Pod "pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e" satisfied condition "success or failure" May 16 14:05:53.587: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e container secret-volume-test: STEP: delete the pod May 16 14:05:53.623: INFO: Waiting for pod pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e to disappear May 16 14:05:53.626: INFO: Pod pod-secrets-5f720c90-225e-46aa-9b62-2ace5d5b529e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:05:53.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3830" for this suite. May 16 14:05:59.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:05:59.755: INFO: namespace secrets-3830 deletion completed in 6.12299743s STEP: Destroying namespace "secret-namespace-4658" for this suite. 
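The secret-volume test works because a `secretName` reference is resolved only within the pod's own namespace, so a same-named secret in another namespace (here `secret-namespace-4658`) cannot leak in. A rough sketch of such a pod — the container name `secret-volume-test` appears in the log; the secret name and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolved in the pod's namespace only
```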
May 16 14:06:05.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:06:05.852: INFO: namespace secret-namespace-4658 deletion completed in 6.096313905s • [SLOW TEST:16.485 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:06:05.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 14:06:05.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff" in namespace "projected-4545" to be "success or failure" May 16 14:06:05.927: INFO: Pod 
"downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.878061ms May 16 14:06:07.931: INFO: Pod "downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007064498s May 16 14:06:09.936: INFO: Pod "downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011843592s STEP: Saw pod success May 16 14:06:09.936: INFO: Pod "downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff" satisfied condition "success or failure" May 16 14:06:09.940: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff container client-container: STEP: delete the pod May 16 14:06:09.988: INFO: Waiting for pod downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff to disappear May 16 14:06:09.999: INFO: Pod downwardapi-volume-c6edeef9-e883-478c-aecf-57bc405dc5ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:06:09.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4545" for this suite. 
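When a container declares no CPU limit, the downward API reports the node's allocatable CPU in its place, which is what this projected-downwardAPI test verifies. A sketch of the projected volume involved — `client-container` is the container name from the log; the file path and divisor are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # Note: no resources.limits.cpu set, so the node allocatable is used.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```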
May 16 14:06:16.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:06:16.093: INFO: namespace projected-4545 deletion completed in 6.090317668s • [SLOW TEST:10.241 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:06:16.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 16 14:06:23.073: INFO: 2 pods remaining May 16 14:06:23.073: INFO: 0 pods has nil DeletionTimestamp May 16 14:06:23.073: INFO: STEP: Gathering metrics W0516 14:06:24.384195 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 16 14:06:24.384: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:06:24.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3849" for this suite. 
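The "deleteOptions says so" in the garbage-collector test refers to the `propagationPolicy` field of the DeleteOptions body sent with the RC deletion: `Foreground` adds a `foregroundDeletion` finalizer so the RC stays visible until all its pods are gone (the "2 pods remaining" countdown above), while `Orphan` would leave the pods behind. A small sketch of building that body, assuming the v1 API field names:

```python
import json

VALID_POLICIES = ("Orphan", "Background", "Foreground")

def delete_options(policy):
    """Build a v1 DeleteOptions body with the given propagation policy.

    "Foreground": owner persists (with a foregroundDeletion finalizer)
    until all dependents are deleted, as exercised by this e2e test.
    """
    if policy not in VALID_POLICIES:
        raise ValueError("unknown propagationPolicy: %s" % policy)
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": policy,
    }

print(json.dumps(delete_options("Foreground")))
```

This body would be sent with the DELETE request for the ReplicationController; the test then polls until the dependent pods, and finally the RC itself, disappear.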
May 16 14:06:30.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:06:30.846: INFO: namespace gc-3849 deletion completed in 6.21608912s • [SLOW TEST:14.752 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:06:30.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:06:57.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3688" for this suite. May 16 14:07:03.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:07:03.161: INFO: namespace namespaces-3688 deletion completed in 6.099763808s STEP: Destroying namespace "nsdeletetest-3161" for this suite. May 16 14:07:03.163: INFO: Namespace nsdeletetest-3161 was already deleted STEP: Destroying namespace "nsdeletetest-574" for this suite. May 16 14:07:09.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:07:09.246: INFO: namespace nsdeletetest-574 deletion completed in 6.082734619s • [SLOW TEST:38.400 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:07:09.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be 
provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 16 14:07:09.934: INFO: Pod name wrapped-volume-race-07ce2a67-db4c-499a-adfe-5712290bf414: Found 0 pods out of 5 May 16 14:07:14.941: INFO: Pod name wrapped-volume-race-07ce2a67-db4c-499a-adfe-5712290bf414: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-07ce2a67-db4c-499a-adfe-5712290bf414 in namespace emptydir-wrapper-9507, will wait for the garbage collector to delete the pods May 16 14:07:31.030: INFO: Deleting ReplicationController wrapped-volume-race-07ce2a67-db4c-499a-adfe-5712290bf414 took: 12.651068ms May 16 14:07:31.330: INFO: Terminating ReplicationController wrapped-volume-race-07ce2a67-db4c-499a-adfe-5712290bf414 pods took: 300.314981ms STEP: Creating RC which spawns configmap-volume pods May 16 14:08:12.655: INFO: Pod name wrapped-volume-race-bdb1b7cd-6154-4831-80fc-4ccc39e48aee: Found 0 pods out of 5 May 16 14:08:17.663: INFO: Pod name wrapped-volume-race-bdb1b7cd-6154-4831-80fc-4ccc39e48aee: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bdb1b7cd-6154-4831-80fc-4ccc39e48aee in namespace emptydir-wrapper-9507, will wait for the garbage collector to delete the pods May 16 14:08:31.748: INFO: Deleting ReplicationController wrapped-volume-race-bdb1b7cd-6154-4831-80fc-4ccc39e48aee took: 8.054841ms May 16 14:08:32.049: INFO: Terminating ReplicationController wrapped-volume-race-bdb1b7cd-6154-4831-80fc-4ccc39e48aee pods took: 300.286259ms STEP: Creating RC which spawns configmap-volume pods May 16 14:09:13.379: INFO: Pod name wrapped-volume-race-68d79e55-4dcb-450d-b5b8-88dea21a88dd: Found 0 pods out of 5 May 16 14:09:18.387: INFO: Pod 
name wrapped-volume-race-68d79e55-4dcb-450d-b5b8-88dea21a88dd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-68d79e55-4dcb-450d-b5b8-88dea21a88dd in namespace emptydir-wrapper-9507, will wait for the garbage collector to delete the pods May 16 14:09:34.493: INFO: Deleting ReplicationController wrapped-volume-race-68d79e55-4dcb-450d-b5b8-88dea21a88dd took: 8.273742ms May 16 14:09:34.793: INFO: Terminating ReplicationController wrapped-volume-race-68d79e55-4dcb-450d-b5b8-88dea21a88dd pods took: 300.275183ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:10:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9507" for this suite. May 16 14:10:31.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:10:31.095: INFO: namespace emptydir-wrapper-9507 deletion completed in 8.114809181s • [SLOW TEST:201.849 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:10:31.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-0e987fce-fdc9-4e7d-981b-ed633fb9e08a STEP: Creating a pod to test consume configMaps May 16 14:10:31.180: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c" in namespace "projected-1321" to be "success or failure" May 16 14:10:31.184: INFO: Pod "pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966447ms May 16 14:10:33.189: INFO: Pod "pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008627765s May 16 14:10:35.192: INFO: Pod "pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012245384s STEP: Saw pod success May 16 14:10:35.193: INFO: Pod "pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c" satisfied condition "success or failure" May 16 14:10:35.195: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c container projected-configmap-volume-test: STEP: delete the pod May 16 14:10:35.234: INFO: Waiting for pod pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c to disappear May 16 14:10:35.261: INFO: Pod pod-projected-configmaps-4bee6b38-3089-4ec9-a147-6ad63bcdd35c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:10:35.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1321" for this suite. 
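The projected-configMap test mounts ConfigMap data through a `projected` volume rather than a plain `configMap` volume. A minimal sketch — the container name `projected-configmap-volume-test` comes from the log; the ConfigMap name, key, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative
```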
May 16 14:10:41.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:10:41.342: INFO: namespace projected-1321 deletion completed in 6.077534859s • [SLOW TEST:10.246 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:10:41.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 16 14:10:41.415: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 16 14:10:41.435: INFO: Waiting for terminating namespaces to be deleted... 
May 16 14:10:41.438: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 16 14:10:41.444: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 16 14:10:41.444: INFO: Container kube-proxy ready: true, restart count 0 May 16 14:10:41.444: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 16 14:10:41.444: INFO: Container kindnet-cni ready: true, restart count 0 May 16 14:10:41.444: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 16 14:10:41.450: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 16 14:10:41.450: INFO: Container coredns ready: true, restart count 0 May 16 14:10:41.450: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 16 14:10:41.450: INFO: Container coredns ready: true, restart count 0 May 16 14:10:41.450: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 16 14:10:41.450: INFO: Container kube-proxy ready: true, restart count 0 May 16 14:10:41.450: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 16 14:10:41.450: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f878ef00efd26], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
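The FailedScheduling event above is provoked deliberately: the test creates a pod whose `nodeSelector` matches no label on any of the 3 nodes, so the scheduler reports "node(s) didn't match node selector" for all of them. A minimal reproduction, with a hypothetical label that no node carries:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod           # name taken from the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1  # illustrative image
  nodeSelector:
    nonexistent-label: "42"      # hypothetical; matches no node
```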
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:10:42.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6742" for this suite. May 16 14:10:48.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:10:48.622: INFO: namespace sched-pred-6742 deletion completed in 6.144740545s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.279 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:10:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 14:10:48.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5758' May 16 14:10:51.372: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 16 14:10:51.372: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 16 14:10:51.388: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 16 14:10:51.414: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 16 14:10:51.479: INFO: scanned /root for discovery docs: May 16 14:10:51.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5758' May 16 14:11:08.354: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 16 14:11:08.354: INFO: stdout: "Created e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10\nScaling up e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 16 14:11:08.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5758' May 16 14:11:08.441: INFO: stderr: "" May 16 14:11:08.441: INFO: stdout: "e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10-rj4kg " May 16 14:11:08.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10-rj4kg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5758' May 16 14:11:08.536: INFO: stderr: "" May 16 14:11:08.536: INFO: stdout: "true" May 16 14:11:08.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10-rj4kg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5758' May 16 14:11:08.617: INFO: stderr: "" May 16 14:11:08.617: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 16 14:11:08.617: INFO: e2e-test-nginx-rc-ce6b89349aea08cfb7bee499193bca10-rj4kg is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 16 14:11:08.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5758' May 16 14:11:08.722: INFO: stderr: "" May 16 14:11:08.722: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:11:08.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5758" for this suite. 
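Both stderr lines in this test flag deprecated surface area: the `--generator=run/v1` form of `kubectl run` and `kubectl rolling-update` itself (removed from kubectl in later releases). A sketch of the command the test runs and a rough modern equivalent using a Deployment (the deployment and container names below are hypothetical):

```shell
# Deprecated path exercised by this test:
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent

# Approximate modern equivalent, driving a Deployment rollout instead:
kubectl set image deployment/my-app nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/my-app
```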
May 16 14:11:30.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:11:30.840: INFO: namespace kubectl-5758 deletion completed in 22.104260672s

• [SLOW TEST:42.218 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:11:30.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 16 14:11:38.984: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:38.992: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:40.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:40.998: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:42.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:42.997: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:44.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:44.996: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:46.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:46.998: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:48.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:48.997: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:50.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:50.996: INFO: Pod pod-with-poststart-http-hook still exists
May 16 14:11:52.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 16 14:11:52.997: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:11:52.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2208" for this suite.
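The test above creates a pod whose container declares a postStart HTTP hook, then checks that the hook request reached the handler container created in the BeforeEach step. A minimal sketch of such a pod; only the pod name comes from the log, the image, path, port, and host are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name taken from the log
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # illustrative; the test's image is not shown in the log
    lifecycle:
      postStart:
        httpGet:                       # kubelet sends this GET right after the container starts
          path: /echo?msg=poststart    # illustrative handler path
          port: 8080                   # illustrative handler port
          host: 10.244.0.10            # illustrative address of the handler pod
```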
May 16 14:12:15.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:12:15.102: INFO: namespace container-lifecycle-hook-2208 deletion completed in 22.100884871s

• [SLOW TEST:44.262 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:12:15.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 16 14:12:15.212: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:12:32.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6506" for this suite.
May 16 14:12:38.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:12:38.295: INFO: namespace pods-6506 deletion completed in 6.094524145s

• [SLOW TEST:23.193 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:12:38.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-86189627-4342-40e2-9158-5be6b8f25f03
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:12:38.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4891" for this suite.
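The ConfigMap test above verifies that the API server's validation rejects a data key that is the empty string. A sketch of the kind of manifest the test submits (only the name prefix comes from the log; the data is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptyKey   # name prefix taken from the log; suffix omitted
data:
  "": "value"                     # an empty key fails API validation, so the create is rejected
```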
May 16 14:12:44.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:12:44.443: INFO: namespace configmap-4891 deletion completed in 6.084144638s

• [SLOW TEST:6.148 seconds]
[sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:12:44.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 16 14:12:44.495: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
May 16 14:12:44.854: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 16 14:12:47.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 16 14:12:49.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235164, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 16 14:12:51.877: INFO: Waited 727.65357ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:12:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8365" for this suite.
May 16 14:12:58.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:12:58.782: INFO: namespace aggregator-8365 deletion completed in 6.398358379s

• [SLOW TEST:14.338 seconds]
[sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:12:58.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9550
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9550
STEP: Creating statefulset with conflicting port in namespace statefulset-9550
STEP: Waiting until pod test-pod will start running in namespace statefulset-9550
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9550
May 16 14:13:04.983: INFO: Observed stateful pod in namespace: statefulset-9550, name: ss-0, uid: e2f6d21c-2b3f-4ffc-9cb9-9e558aac05a6, status phase: Pending. Waiting for statefulset controller to delete.
May 16 14:13:05.520: INFO: Observed stateful pod in namespace: statefulset-9550, name: ss-0, uid: e2f6d21c-2b3f-4ffc-9cb9-9e558aac05a6, status phase: Failed. Waiting for statefulset controller to delete.
May 16 14:13:05.528: INFO: Observed stateful pod in namespace: statefulset-9550, name: ss-0, uid: e2f6d21c-2b3f-4ffc-9cb9-9e558aac05a6, status phase: Failed. Waiting for statefulset controller to delete.
May 16 14:13:05.534: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9550
STEP: Removing pod with conflicting port in namespace statefulset-9550
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9550 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 16 14:13:11.639: INFO: Deleting all statefulset in ns statefulset-9550
May 16 14:13:11.642: INFO: Scaling statefulset ss to 0
May 16 14:13:31.661: INFO: Waiting for statefulset status.replicas updated to 0
May 16 14:13:31.664: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:13:31.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9550" for this suite.
May 16 14:13:37.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:13:37.807: INFO: namespace statefulset-9550 deletion completed in 6.123588333s

• [SLOW TEST:39.025 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:13:37.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 16 14:13:37.835: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 16 14:13:37.843: INFO: Waiting for terminating namespaces to be deleted...
May 16 14:13:37.844: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 16 14:13:37.848: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.848: INFO: Container kube-proxy ready: true, restart count 0
May 16 14:13:37.848: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.848: INFO: Container kindnet-cni ready: true, restart count 0
May 16 14:13:37.848: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 16 14:13:37.852: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.852: INFO: Container coredns ready: true, restart count 0
May 16 14:13:37.852: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.852: INFO: Container coredns ready: true, restart count 0
May 16 14:13:37.852: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.852: INFO: Container kube-proxy ready: true, restart count 0
May 16 14:13:37.852: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 16 14:13:37.852: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-565b20f0-fc1c-4c69-9d63-dce79bb31a75 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-565b20f0-fc1c-4c69-9d63-dce79bb31a75 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-565b20f0-fc1c-4c69-9d63-dce79bb31a75
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:13:46.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5475" for this suite.
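The scheduling test above applies a random label to a node and then relaunches a pod with a matching nodeSelector. A sketch of the relaunched pod; the label key and value are taken from the log, while the pod name and image are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                    # illustrative name; the log does not show the pod name
spec:
  nodeSelector:                        # the scheduler only places the pod on a node carrying this label
    kubernetes.io/e2e-565b20f0-fc1c-4c69-9d63-dce79bb31a75: "42"   # key and value from the log
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1        # illustrative image
```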
May 16 14:14:04.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:14:04.148: INFO: namespace sched-pred-5475 deletion completed in 18.094012022s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:26.340 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:14:04.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
May 16 14:14:04.242: INFO: Waiting up to 5m0s for pod "client-containers-9100f186-9fb3-48dd-8919-d28778974e6d" in namespace "containers-4349" to be "success or failure"
May 16 14:14:04.247: INFO: Pod "client-containers-9100f186-9fb3-48dd-8919-d28778974e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.399368ms
May 16 14:14:06.276: INFO: Pod "client-containers-9100f186-9fb3-48dd-8919-d28778974e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033999262s
May 16 14:14:08.362: INFO: Pod "client-containers-9100f186-9fb3-48dd-8919-d28778974e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119506939s
STEP: Saw pod success
May 16 14:14:08.362: INFO: Pod "client-containers-9100f186-9fb3-48dd-8919-d28778974e6d" satisfied condition "success or failure"
May 16 14:14:08.366: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9100f186-9fb3-48dd-8919-d28778974e6d container test-container:
STEP: delete the pod
May 16 14:14:08.770: INFO: Waiting for pod client-containers-9100f186-9fb3-48dd-8919-d28778974e6d to disappear
May 16 14:14:08.842: INFO: Pod client-containers-9100f186-9fb3-48dd-8919-d28778974e6d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:14:08.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4349" for this suite.
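The Docker Containers test above runs a short-lived pod whose spec overrides the image's default arguments (the Docker CMD) and then checks its output. A sketch of such a pod; the container name comes from the log, while the pod name, image, and args are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override-args   # illustrative name
spec:
  restartPolicy: Never                    # the test waits for the pod to reach "Succeeded"
  containers:
  - name: test-container                  # container name taken from the log
    image: docker.io/library/busybox:1.29 # illustrative image
    args: ["echo", "override", "arguments"]  # args replace the image's CMD; command would replace ENTRYPOINT
```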
May 16 14:14:14.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:14:14.957: INFO: namespace containers-4349 deletion completed in 6.110800288s

• [SLOW TEST:10.808 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:14:14.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 16 14:14:19.640: INFO: Successfully updated pod "labelsupdate4282bbfc-75a2-490c-b2d0-de9c687c7620"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:14:21.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9550" for this suite.
May 16 14:14:43.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:14:43.757: INFO: namespace projected-9550 deletion completed in 22.083433646s

• [SLOW TEST:28.800 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:14:43.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 16 14:14:43.813: INFO: Waiting up to 5m0s for pod "pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5" in namespace "emptydir-1240" to be "success or failure"
May 16 14:14:43.823: INFO: Pod "pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486234ms
May 16 14:14:45.836: INFO: Pod "pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022530255s
May 16 14:14:47.857: INFO: Pod "pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043949852s
STEP: Saw pod success
May 16 14:14:47.857: INFO: Pod "pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5" satisfied condition "success or failure"
May 16 14:14:47.860: INFO: Trying to get logs from node iruya-worker2 pod pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5 container test-container:
STEP: delete the pod
May 16 14:14:47.879: INFO: Waiting for pod pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5 to disappear
May 16 14:14:47.883: INFO: Pod pod-b9c31e77-a8d4-44c6-bc1b-510a328aabd5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:14:47.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1240" for this suite.
May 16 14:14:53.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:14:53.969: INFO: namespace emptydir-1240 deletion completed in 6.0830761s

• [SLOW TEST:10.212 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:14:53.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting
for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2272
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 16 14:14:54.081: INFO: Found 0 stateful pods, waiting for 3
May 16 14:15:04.086: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 16 14:15:04.086: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 16 14:15:04.086: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 16 14:15:04.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2272 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 16 14:15:04.494: INFO: stderr: "I0516 14:15:04.365728 2913 log.go:172] (0xc000a72420) (0xc0005eab40) Create stream\nI0516 14:15:04.365759 2913 log.go:172] (0xc000a72420) (0xc0005eab40) Stream added, broadcasting: 1\nI0516 14:15:04.367768 2913 log.go:172] (0xc000a72420) Reply frame received for 1\nI0516 14:15:04.367811 2913 log.go:172] (0xc000a72420) (0xc0005eabe0) Create stream\nI0516 14:15:04.367823 2913 log.go:172] (0xc000a72420) (0xc0005eabe0) Stream added, broadcasting: 3\nI0516 14:15:04.368774 2913 log.go:172] (0xc000a72420) Reply frame received for 3\nI0516 14:15:04.368806 2913 log.go:172] (0xc000a72420) (0xc000516000) Create stream\nI0516 14:15:04.368818 2913 log.go:172] (0xc000a72420) (0xc000516000) Stream added, broadcasting: 5\nI0516
14:15:04.370488 2913 log.go:172] (0xc000a72420) Reply frame received for 5\nI0516 14:15:04.451787 2913 log.go:172] (0xc000a72420) Data frame received for 5\nI0516 14:15:04.451810 2913 log.go:172] (0xc000516000) (5) Data frame handling\nI0516 14:15:04.451832 2913 log.go:172] (0xc000516000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:15:04.485872 2913 log.go:172] (0xc000a72420) Data frame received for 5\nI0516 14:15:04.485901 2913 log.go:172] (0xc000a72420) Data frame received for 3\nI0516 14:15:04.485936 2913 log.go:172] (0xc0005eabe0) (3) Data frame handling\nI0516 14:15:04.485949 2913 log.go:172] (0xc0005eabe0) (3) Data frame sent\nI0516 14:15:04.485961 2913 log.go:172] (0xc000a72420) Data frame received for 3\nI0516 14:15:04.485980 2913 log.go:172] (0xc0005eabe0) (3) Data frame handling\nI0516 14:15:04.486009 2913 log.go:172] (0xc000516000) (5) Data frame handling\nI0516 14:15:04.488074 2913 log.go:172] (0xc000a72420) Data frame received for 1\nI0516 14:15:04.488093 2913 log.go:172] (0xc0005eab40) (1) Data frame handling\nI0516 14:15:04.488112 2913 log.go:172] (0xc0005eab40) (1) Data frame sent\nI0516 14:15:04.488126 2913 log.go:172] (0xc000a72420) (0xc0005eab40) Stream removed, broadcasting: 1\nI0516 14:15:04.488419 2913 log.go:172] (0xc000a72420) Go away received\nI0516 14:15:04.488639 2913 log.go:172] (0xc000a72420) (0xc0005eab40) Stream removed, broadcasting: 1\nI0516 14:15:04.488663 2913 log.go:172] (0xc000a72420) (0xc0005eabe0) Stream removed, broadcasting: 3\nI0516 14:15:04.488676 2913 log.go:172] (0xc000a72420) (0xc000516000) Stream removed, broadcasting: 5\n"
May 16 14:15:04.494: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 16 14:15:04.494: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to
docker.io/library/nginx:1.15-alpine May 16 14:15:14.525: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 16 14:15:24.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2272 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 14:15:24.858: INFO: stderr: "I0516 14:15:24.713731 2935 log.go:172] (0xc000a2c630) (0xc0006148c0) Create stream\nI0516 14:15:24.713777 2935 log.go:172] (0xc000a2c630) (0xc0006148c0) Stream added, broadcasting: 1\nI0516 14:15:24.715942 2935 log.go:172] (0xc000a2c630) Reply frame received for 1\nI0516 14:15:24.716020 2935 log.go:172] (0xc000a2c630) (0xc000840000) Create stream\nI0516 14:15:24.716076 2935 log.go:172] (0xc000a2c630) (0xc000840000) Stream added, broadcasting: 3\nI0516 14:15:24.717786 2935 log.go:172] (0xc000a2c630) Reply frame received for 3\nI0516 14:15:24.717818 2935 log.go:172] (0xc000a2c630) (0xc0008400a0) Create stream\nI0516 14:15:24.717826 2935 log.go:172] (0xc000a2c630) (0xc0008400a0) Stream added, broadcasting: 5\nI0516 14:15:24.718908 2935 log.go:172] (0xc000a2c630) Reply frame received for 5\nI0516 14:15:24.852084 2935 log.go:172] (0xc000a2c630) Data frame received for 3\nI0516 14:15:24.852204 2935 log.go:172] (0xc000840000) (3) Data frame handling\nI0516 14:15:24.852247 2935 log.go:172] (0xc000840000) (3) Data frame sent\nI0516 14:15:24.852261 2935 log.go:172] (0xc000a2c630) Data frame received for 3\nI0516 14:15:24.852272 2935 log.go:172] (0xc000840000) (3) Data frame handling\nI0516 14:15:24.852345 2935 log.go:172] (0xc000a2c630) Data frame received for 5\nI0516 14:15:24.852357 2935 log.go:172] (0xc0008400a0) (5) Data frame handling\nI0516 14:15:24.852373 2935 log.go:172] (0xc0008400a0) (5) Data frame sent\nI0516 14:15:24.852380 2935 log.go:172] (0xc000a2c630) Data frame received for 5\nI0516 14:15:24.852385 2935 log.go:172] (0xc0008400a0) (5) Data frame handling\n+ mv -v 
/tmp/index.html /usr/share/nginx/html/\nI0516 14:15:24.854188 2935 log.go:172] (0xc000a2c630) Data frame received for 1\nI0516 14:15:24.854315 2935 log.go:172] (0xc0006148c0) (1) Data frame handling\nI0516 14:15:24.854390 2935 log.go:172] (0xc0006148c0) (1) Data frame sent\nI0516 14:15:24.854428 2935 log.go:172] (0xc000a2c630) (0xc0006148c0) Stream removed, broadcasting: 1\nI0516 14:15:24.854453 2935 log.go:172] (0xc000a2c630) Go away received\nI0516 14:15:24.854762 2935 log.go:172] (0xc000a2c630) (0xc0006148c0) Stream removed, broadcasting: 1\nI0516 14:15:24.854778 2935 log.go:172] (0xc000a2c630) (0xc000840000) Stream removed, broadcasting: 3\nI0516 14:15:24.854789 2935 log.go:172] (0xc000a2c630) (0xc0008400a0) Stream removed, broadcasting: 5\n" May 16 14:15:24.858: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:15:24.858: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:15:34.888: INFO: Waiting for StatefulSet statefulset-2272/ss2 to complete update May 16 14:15:34.888: INFO: Waiting for Pod statefulset-2272/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 16 14:15:34.888: INFO: Waiting for Pod statefulset-2272/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 16 14:15:44.908: INFO: Waiting for StatefulSet statefulset-2272/ss2 to complete update May 16 14:15:44.909: INFO: Waiting for Pod statefulset-2272/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 16 14:15:54.895: INFO: Waiting for StatefulSet statefulset-2272/ss2 to complete update STEP: Rolling back to a previous revision May 16 14:16:04.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2272 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 14:16:05.144: INFO: stderr: "I0516 14:16:05.032370 2955 log.go:172] 
(0xc000964630) (0xc000686a00) Create stream\nI0516 14:16:05.032436 2955 log.go:172] (0xc000964630) (0xc000686a00) Stream added, broadcasting: 1\nI0516 14:16:05.036404 2955 log.go:172] (0xc000964630) Reply frame received for 1\nI0516 14:16:05.036440 2955 log.go:172] (0xc000964630) (0xc000686140) Create stream\nI0516 14:16:05.036456 2955 log.go:172] (0xc000964630) (0xc000686140) Stream added, broadcasting: 3\nI0516 14:16:05.037440 2955 log.go:172] (0xc000964630) Reply frame received for 3\nI0516 14:16:05.037480 2955 log.go:172] (0xc000964630) (0xc00002e000) Create stream\nI0516 14:16:05.037491 2955 log.go:172] (0xc000964630) (0xc00002e000) Stream added, broadcasting: 5\nI0516 14:16:05.038239 2955 log.go:172] (0xc000964630) Reply frame received for 5\nI0516 14:16:05.106282 2955 log.go:172] (0xc000964630) Data frame received for 5\nI0516 14:16:05.106312 2955 log.go:172] (0xc00002e000) (5) Data frame handling\nI0516 14:16:05.106334 2955 log.go:172] (0xc00002e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:16:05.135657 2955 log.go:172] (0xc000964630) Data frame received for 3\nI0516 14:16:05.135706 2955 log.go:172] (0xc000686140) (3) Data frame handling\nI0516 14:16:05.135750 2955 log.go:172] (0xc000686140) (3) Data frame sent\nI0516 14:16:05.135781 2955 log.go:172] (0xc000964630) Data frame received for 5\nI0516 14:16:05.135808 2955 log.go:172] (0xc00002e000) (5) Data frame handling\nI0516 14:16:05.136127 2955 log.go:172] (0xc000964630) Data frame received for 3\nI0516 14:16:05.136174 2955 log.go:172] (0xc000686140) (3) Data frame handling\nI0516 14:16:05.138721 2955 log.go:172] (0xc000964630) Data frame received for 1\nI0516 14:16:05.138747 2955 log.go:172] (0xc000686a00) (1) Data frame handling\nI0516 14:16:05.138777 2955 log.go:172] (0xc000686a00) (1) Data frame sent\nI0516 14:16:05.138798 2955 log.go:172] (0xc000964630) (0xc000686a00) Stream removed, broadcasting: 1\nI0516 14:16:05.138823 2955 log.go:172] (0xc000964630) Go away 
received\nI0516 14:16:05.139161 2955 log.go:172] (0xc000964630) (0xc000686a00) Stream removed, broadcasting: 1\nI0516 14:16:05.139182 2955 log.go:172] (0xc000964630) (0xc000686140) Stream removed, broadcasting: 3\nI0516 14:16:05.139192 2955 log.go:172] (0xc000964630) (0xc00002e000) Stream removed, broadcasting: 5\n" May 16 14:16:05.144: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 14:16:05.144: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 14:16:15.185: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 16 14:16:25.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2272 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 14:16:25.402: INFO: stderr: "I0516 14:16:25.337603 2976 log.go:172] (0xc0009366e0) (0xc0007da8c0) Create stream\nI0516 14:16:25.337672 2976 log.go:172] (0xc0009366e0) (0xc0007da8c0) Stream added, broadcasting: 1\nI0516 14:16:25.341777 2976 log.go:172] (0xc0009366e0) Reply frame received for 1\nI0516 14:16:25.341838 2976 log.go:172] (0xc0009366e0) (0xc0007da000) Create stream\nI0516 14:16:25.341856 2976 log.go:172] (0xc0009366e0) (0xc0007da000) Stream added, broadcasting: 3\nI0516 14:16:25.342680 2976 log.go:172] (0xc0009366e0) Reply frame received for 3\nI0516 14:16:25.342709 2976 log.go:172] (0xc0009366e0) (0xc0006d00a0) Create stream\nI0516 14:16:25.342719 2976 log.go:172] (0xc0009366e0) (0xc0006d00a0) Stream added, broadcasting: 5\nI0516 14:16:25.343421 2976 log.go:172] (0xc0009366e0) Reply frame received for 5\nI0516 14:16:25.396738 2976 log.go:172] (0xc0009366e0) Data frame received for 5\nI0516 14:16:25.396772 2976 log.go:172] (0xc0006d00a0) (5) Data frame handling\nI0516 14:16:25.396781 2976 log.go:172] (0xc0006d00a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 
14:16:25.396789 2976 log.go:172] (0xc0009366e0) Data frame received for 5\nI0516 14:16:25.396823 2976 log.go:172] (0xc0006d00a0) (5) Data frame handling\nI0516 14:16:25.396841 2976 log.go:172] (0xc0009366e0) Data frame received for 3\nI0516 14:16:25.396848 2976 log.go:172] (0xc0007da000) (3) Data frame handling\nI0516 14:16:25.396884 2976 log.go:172] (0xc0007da000) (3) Data frame sent\nI0516 14:16:25.396896 2976 log.go:172] (0xc0009366e0) Data frame received for 3\nI0516 14:16:25.396901 2976 log.go:172] (0xc0007da000) (3) Data frame handling\nI0516 14:16:25.398029 2976 log.go:172] (0xc0009366e0) Data frame received for 1\nI0516 14:16:25.398052 2976 log.go:172] (0xc0007da8c0) (1) Data frame handling\nI0516 14:16:25.398069 2976 log.go:172] (0xc0007da8c0) (1) Data frame sent\nI0516 14:16:25.398078 2976 log.go:172] (0xc0009366e0) (0xc0007da8c0) Stream removed, broadcasting: 1\nI0516 14:16:25.398094 2976 log.go:172] (0xc0009366e0) Go away received\nI0516 14:16:25.398375 2976 log.go:172] (0xc0009366e0) (0xc0007da8c0) Stream removed, broadcasting: 1\nI0516 14:16:25.398391 2976 log.go:172] (0xc0009366e0) (0xc0007da000) Stream removed, broadcasting: 3\nI0516 14:16:25.398399 2976 log.go:172] (0xc0009366e0) (0xc0006d00a0) Stream removed, broadcasting: 5\n" May 16 14:16:25.402: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:16:25.402: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:16:35.422: INFO: Waiting for StatefulSet statefulset-2272/ss2 to complete update May 16 14:16:35.422: INFO: Waiting for Pod statefulset-2272/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 16 14:16:35.422: INFO: Waiting for Pod statefulset-2272/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 16 14:16:45.430: INFO: Waiting for StatefulSet statefulset-2272/ss2 to complete update May 16 14:16:45.430: INFO: Waiting for Pod 
statefulset-2272/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 16 14:16:55.444: INFO: Deleting all statefulset in ns statefulset-2272 May 16 14:16:55.446: INFO: Scaling statefulset ss2 to 0 May 16 14:17:25.480: INFO: Waiting for statefulset status.replicas updated to 0 May 16 14:17:25.483: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:17:25.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2272" for this suite. May 16 14:17:33.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:17:33.663: INFO: namespace statefulset-2272 deletion completed in 8.130288095s • [SLOW TEST:159.693 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client May 16 14:17:33.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3287 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 16 14:17:33.757: INFO: Found 0 stateful pods, waiting for 3 May 16 14:17:43.762: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 14:17:43.762: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 14:17:43.762: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 16 14:17:53.775: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 14:17:53.775: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 14:17:53.775: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 16 14:17:53.798: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 16 14:18:03.834: INFO: Updating stateful set ss2 May 16 14:18:03.909: INFO: Waiting for Pod statefulset-3287/ss2-2 to have revision 
ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 16 14:18:14.069: INFO: Found 2 stateful pods, waiting for 3 May 16 14:18:24.074: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 14:18:24.074: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 14:18:24.074: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 16 14:18:24.098: INFO: Updating stateful set ss2 May 16 14:18:24.115: INFO: Waiting for Pod statefulset-3287/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 16 14:18:34.139: INFO: Updating stateful set ss2 May 16 14:18:34.223: INFO: Waiting for StatefulSet statefulset-3287/ss2 to complete update May 16 14:18:34.223: INFO: Waiting for Pod statefulset-3287/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 16 14:18:44.231: INFO: Deleting all statefulset in ns statefulset-3287 May 16 14:18:44.233: INFO: Scaling statefulset ss2 to 0 May 16 14:19:04.266: INFO: Waiting for statefulset status.replicas updated to 0 May 16 14:19:04.268: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:19:04.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3287" for this suite. 
May 16 14:19:10.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:19:10.370: INFO: namespace statefulset-3287 deletion completed in 6.088218125s • [SLOW TEST:96.706 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:19:10.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 16 14:19:10.430: INFO: Waiting up to 5m0s for pod "client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512" in namespace "containers-1252" to be "success or failure" May 16 14:19:10.446: INFO: Pod "client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.831658ms May 16 14:19:12.451: INFO: Pod "client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020709729s May 16 14:19:14.455: INFO: Pod "client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024621445s STEP: Saw pod success May 16 14:19:14.455: INFO: Pod "client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512" satisfied condition "success or failure" May 16 14:19:14.457: INFO: Trying to get logs from node iruya-worker pod client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512 container test-container: STEP: delete the pod May 16 14:19:14.584: INFO: Waiting for pod client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512 to disappear May 16 14:19:14.837: INFO: Pod client-containers-5e5a1147-a53e-4ef2-ba47-e32ac6557512 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:19:14.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1252" for this suite. 
May 16 14:19:20.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:19:20.934: INFO: namespace containers-1252 deletion completed in 6.093600082s • [SLOW TEST:10.563 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:19:20.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3049 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3049 STEP: Waiting until all stateful set ss replicas 
will be running in namespace statefulset-3049 May 16 14:19:21.026: INFO: Found 0 stateful pods, waiting for 1 May 16 14:19:31.031: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 16 14:19:31.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 14:19:31.336: INFO: stderr: "I0516 14:19:31.166772 2995 log.go:172] (0xc00095e420) (0xc0003dc6e0) Create stream\nI0516 14:19:31.166827 2995 log.go:172] (0xc00095e420) (0xc0003dc6e0) Stream added, broadcasting: 1\nI0516 14:19:31.170703 2995 log.go:172] (0xc00095e420) Reply frame received for 1\nI0516 14:19:31.170794 2995 log.go:172] (0xc00095e420) (0xc00066a280) Create stream\nI0516 14:19:31.170822 2995 log.go:172] (0xc00095e420) (0xc00066a280) Stream added, broadcasting: 3\nI0516 14:19:31.171718 2995 log.go:172] (0xc00095e420) Reply frame received for 3\nI0516 14:19:31.171792 2995 log.go:172] (0xc00095e420) (0xc0003dc000) Create stream\nI0516 14:19:31.171824 2995 log.go:172] (0xc00095e420) (0xc0003dc000) Stream added, broadcasting: 5\nI0516 14:19:31.172694 2995 log.go:172] (0xc00095e420) Reply frame received for 5\nI0516 14:19:31.267375 2995 log.go:172] (0xc00095e420) Data frame received for 5\nI0516 14:19:31.267402 2995 log.go:172] (0xc0003dc000) (5) Data frame handling\nI0516 14:19:31.267418 2995 log.go:172] (0xc0003dc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:19:31.326730 2995 log.go:172] (0xc00095e420) Data frame received for 3\nI0516 14:19:31.326782 2995 log.go:172] (0xc00066a280) (3) Data frame handling\nI0516 14:19:31.326803 2995 log.go:172] (0xc00066a280) (3) Data frame sent\nI0516 14:19:31.326845 2995 log.go:172] (0xc00095e420) Data frame received for 5\nI0516 14:19:31.326864 2995 log.go:172] (0xc0003dc000) (5) Data 
frame handling\nI0516 14:19:31.327189 2995 log.go:172] (0xc00095e420) Data frame received for 3\nI0516 14:19:31.327213 2995 log.go:172] (0xc00066a280) (3) Data frame handling\nI0516 14:19:31.328945 2995 log.go:172] (0xc00095e420) Data frame received for 1\nI0516 14:19:31.328971 2995 log.go:172] (0xc0003dc6e0) (1) Data frame handling\nI0516 14:19:31.328989 2995 log.go:172] (0xc0003dc6e0) (1) Data frame sent\nI0516 14:19:31.329355 2995 log.go:172] (0xc00095e420) (0xc0003dc6e0) Stream removed, broadcasting: 1\nI0516 14:19:31.329517 2995 log.go:172] (0xc00095e420) Go away received\nI0516 14:19:31.329829 2995 log.go:172] (0xc00095e420) (0xc0003dc6e0) Stream removed, broadcasting: 1\nI0516 14:19:31.329850 2995 log.go:172] (0xc00095e420) (0xc00066a280) Stream removed, broadcasting: 3\nI0516 14:19:31.329862 2995 log.go:172] (0xc00095e420) (0xc0003dc000) Stream removed, broadcasting: 5\n" May 16 14:19:31.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 14:19:31.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 14:19:31.340: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 14:19:41.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 14:19:41.346: INFO: Waiting for statefulset status.replicas updated to 0 May 16 14:19:41.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999229s May 16 14:19:42.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.924999136s May 16 14:19:43.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.920790864s May 16 14:19:44.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.915955537s May 16 14:19:45.449: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.911306356s May 16 14:19:46.458: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 4.90684026s May 16 14:19:47.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.89789427s May 16 14:19:48.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.894264531s May 16 14:19:49.471: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.889762239s May 16 14:19:50.476: INFO: Verifying statefulset ss doesn't scale past 1 for another 884.62212ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3049 May 16 14:19:51.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 14:19:51.713: INFO: stderr: "I0516 14:19:51.600927 3016 log.go:172] (0xc000724a50) (0xc000954640) Create stream\nI0516 14:19:51.600974 3016 log.go:172] (0xc000724a50) (0xc000954640) Stream added, broadcasting: 1\nI0516 14:19:51.603206 3016 log.go:172] (0xc000724a50) Reply frame received for 1\nI0516 14:19:51.603271 3016 log.go:172] (0xc000724a50) (0xc0006ca1e0) Create stream\nI0516 14:19:51.603349 3016 log.go:172] (0xc000724a50) (0xc0006ca1e0) Stream added, broadcasting: 3\nI0516 14:19:51.604589 3016 log.go:172] (0xc000724a50) Reply frame received for 3\nI0516 14:19:51.604614 3016 log.go:172] (0xc000724a50) (0xc000522000) Create stream\nI0516 14:19:51.604623 3016 log.go:172] (0xc000724a50) (0xc000522000) Stream added, broadcasting: 5\nI0516 14:19:51.605689 3016 log.go:172] (0xc000724a50) Reply frame received for 5\nI0516 14:19:51.702489 3016 log.go:172] (0xc000724a50) Data frame received for 5\nI0516 14:19:51.702521 3016 log.go:172] (0xc000522000) (5) Data frame handling\nI0516 14:19:51.702552 3016 log.go:172] (0xc000522000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 14:19:51.703298 3016 log.go:172] (0xc000724a50) Data frame received for 3\nI0516 14:19:51.703334 3016 log.go:172] 
(0xc0006ca1e0) (3) Data frame handling\nI0516 14:19:51.703353 3016 log.go:172] (0xc0006ca1e0) (3) Data frame sent\nI0516 14:19:51.703375 3016 log.go:172] (0xc000724a50) Data frame received for 3\nI0516 14:19:51.703388 3016 log.go:172] (0xc0006ca1e0) (3) Data frame handling\nI0516 14:19:51.703588 3016 log.go:172] (0xc000724a50) Data frame received for 5\nI0516 14:19:51.703615 3016 log.go:172] (0xc000522000) (5) Data frame handling\nI0516 14:19:51.706106 3016 log.go:172] (0xc000724a50) Data frame received for 1\nI0516 14:19:51.706132 3016 log.go:172] (0xc000954640) (1) Data frame handling\nI0516 14:19:51.706144 3016 log.go:172] (0xc000954640) (1) Data frame sent\nI0516 14:19:51.706158 3016 log.go:172] (0xc000724a50) (0xc000954640) Stream removed, broadcasting: 1\nI0516 14:19:51.706181 3016 log.go:172] (0xc000724a50) Go away received\nI0516 14:19:51.706670 3016 log.go:172] (0xc000724a50) (0xc000954640) Stream removed, broadcasting: 1\nI0516 14:19:51.706726 3016 log.go:172] (0xc000724a50) (0xc0006ca1e0) Stream removed, broadcasting: 3\nI0516 14:19:51.706760 3016 log.go:172] (0xc000724a50) (0xc000522000) Stream removed, broadcasting: 5\n" May 16 14:19:51.713: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:19:51.713: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:19:51.716: INFO: Found 1 stateful pods, waiting for 3 May 16 14:20:01.721: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 14:20:01.721: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 14:20:01.721: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 16 14:20:01.727: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 14:20:01.977: INFO: stderr: "I0516 14:20:01.871733 3034 log.go:172] (0xc000118dc0) (0xc0005b0820) Create stream\nI0516 14:20:01.871792 3034 log.go:172] (0xc000118dc0) (0xc0005b0820) Stream added, broadcasting: 1\nI0516 14:20:01.874292 3034 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0516 14:20:01.874346 3034 log.go:172] (0xc000118dc0) (0xc00085e000) Create stream\nI0516 14:20:01.874366 3034 log.go:172] (0xc000118dc0) (0xc00085e000) Stream added, broadcasting: 3\nI0516 14:20:01.875335 3034 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0516 14:20:01.875385 3034 log.go:172] (0xc000118dc0) (0xc0005b08c0) Create stream\nI0516 14:20:01.875409 3034 log.go:172] (0xc000118dc0) (0xc0005b08c0) Stream added, broadcasting: 5\nI0516 14:20:01.876509 3034 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0516 14:20:01.970667 3034 log.go:172] (0xc000118dc0) Data frame received for 5\nI0516 14:20:01.970724 3034 log.go:172] (0xc0005b08c0) (5) Data frame handling\nI0516 14:20:01.970740 3034 log.go:172] (0xc0005b08c0) (5) Data frame sent\nI0516 14:20:01.970751 3034 log.go:172] (0xc000118dc0) Data frame received for 5\nI0516 14:20:01.970761 3034 log.go:172] (0xc0005b08c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:20:01.970820 3034 log.go:172] (0xc000118dc0) Data frame received for 3\nI0516 14:20:01.970862 3034 log.go:172] (0xc00085e000) (3) Data frame handling\nI0516 14:20:01.970902 3034 log.go:172] (0xc00085e000) (3) Data frame sent\nI0516 14:20:01.970929 3034 log.go:172] (0xc000118dc0) Data frame received for 3\nI0516 14:20:01.970961 3034 log.go:172] (0xc00085e000) (3) Data frame handling\nI0516 14:20:01.972278 3034 log.go:172] (0xc000118dc0) Data frame received for 1\nI0516 14:20:01.972387 3034 log.go:172] (0xc0005b0820) (1) Data frame handling\nI0516 14:20:01.972512 
3034 log.go:172] (0xc0005b0820) (1) Data frame sent\nI0516 14:20:01.972721 3034 log.go:172] (0xc000118dc0) (0xc0005b0820) Stream removed, broadcasting: 1\nI0516 14:20:01.972773 3034 log.go:172] (0xc000118dc0) Go away received\nI0516 14:20:01.973522 3034 log.go:172] (0xc000118dc0) (0xc0005b0820) Stream removed, broadcasting: 1\nI0516 14:20:01.973565 3034 log.go:172] (0xc000118dc0) (0xc00085e000) Stream removed, broadcasting: 3\nI0516 14:20:01.973585 3034 log.go:172] (0xc000118dc0) (0xc0005b08c0) Stream removed, broadcasting: 5\n" May 16 14:20:01.977: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 14:20:01.977: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 14:20:01.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 14:20:02.242: INFO: stderr: "I0516 14:20:02.135077 3054 log.go:172] (0xc0005c62c0) (0xc000916640) Create stream\nI0516 14:20:02.135148 3054 log.go:172] (0xc0005c62c0) (0xc000916640) Stream added, broadcasting: 1\nI0516 14:20:02.138216 3054 log.go:172] (0xc0005c62c0) Reply frame received for 1\nI0516 14:20:02.138275 3054 log.go:172] (0xc0005c62c0) (0xc00094a000) Create stream\nI0516 14:20:02.138296 3054 log.go:172] (0xc0005c62c0) (0xc00094a000) Stream added, broadcasting: 3\nI0516 14:20:02.139219 3054 log.go:172] (0xc0005c62c0) Reply frame received for 3\nI0516 14:20:02.139281 3054 log.go:172] (0xc0005c62c0) (0xc000120320) Create stream\nI0516 14:20:02.139304 3054 log.go:172] (0xc0005c62c0) (0xc000120320) Stream added, broadcasting: 5\nI0516 14:20:02.140448 3054 log.go:172] (0xc0005c62c0) Reply frame received for 5\nI0516 14:20:02.197548 3054 log.go:172] (0xc0005c62c0) Data frame received for 5\nI0516 14:20:02.197578 3054 log.go:172] (0xc000120320) (5) Data frame handling\nI0516 
14:20:02.197602 3054 log.go:172] (0xc000120320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:20:02.233926 3054 log.go:172] (0xc0005c62c0) Data frame received for 5\nI0516 14:20:02.233945 3054 log.go:172] (0xc000120320) (5) Data frame handling\nI0516 14:20:02.233971 3054 log.go:172] (0xc0005c62c0) Data frame received for 3\nI0516 14:20:02.233993 3054 log.go:172] (0xc00094a000) (3) Data frame handling\nI0516 14:20:02.234013 3054 log.go:172] (0xc00094a000) (3) Data frame sent\nI0516 14:20:02.234033 3054 log.go:172] (0xc0005c62c0) Data frame received for 3\nI0516 14:20:02.234043 3054 log.go:172] (0xc00094a000) (3) Data frame handling\nI0516 14:20:02.235943 3054 log.go:172] (0xc0005c62c0) Data frame received for 1\nI0516 14:20:02.235975 3054 log.go:172] (0xc000916640) (1) Data frame handling\nI0516 14:20:02.235994 3054 log.go:172] (0xc000916640) (1) Data frame sent\nI0516 14:20:02.236022 3054 log.go:172] (0xc0005c62c0) (0xc000916640) Stream removed, broadcasting: 1\nI0516 14:20:02.236043 3054 log.go:172] (0xc0005c62c0) Go away received\nI0516 14:20:02.236352 3054 log.go:172] (0xc0005c62c0) (0xc000916640) Stream removed, broadcasting: 1\nI0516 14:20:02.236369 3054 log.go:172] (0xc0005c62c0) (0xc00094a000) Stream removed, broadcasting: 3\nI0516 14:20:02.236378 3054 log.go:172] (0xc0005c62c0) (0xc000120320) Stream removed, broadcasting: 5\n" May 16 14:20:02.243: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 14:20:02.243: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 14:20:02.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 16 14:20:02.479: INFO: stderr: "I0516 14:20:02.374325 3076 log.go:172] (0xc000116840) (0xc0004f66e0) Create stream\nI0516 14:20:02.374366 3076 log.go:172] 
(0xc000116840) (0xc0004f66e0) Stream added, broadcasting: 1\nI0516 14:20:02.376277 3076 log.go:172] (0xc000116840) Reply frame received for 1\nI0516 14:20:02.376331 3076 log.go:172] (0xc000116840) (0xc0006c6000) Create stream\nI0516 14:20:02.376361 3076 log.go:172] (0xc000116840) (0xc0006c6000) Stream added, broadcasting: 3\nI0516 14:20:02.377293 3076 log.go:172] (0xc000116840) Reply frame received for 3\nI0516 14:20:02.377324 3076 log.go:172] (0xc000116840) (0xc0004f6780) Create stream\nI0516 14:20:02.377336 3076 log.go:172] (0xc000116840) (0xc0004f6780) Stream added, broadcasting: 5\nI0516 14:20:02.378160 3076 log.go:172] (0xc000116840) Reply frame received for 5\nI0516 14:20:02.442487 3076 log.go:172] (0xc000116840) Data frame received for 5\nI0516 14:20:02.442526 3076 log.go:172] (0xc0004f6780) (5) Data frame handling\nI0516 14:20:02.442549 3076 log.go:172] (0xc0004f6780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0516 14:20:02.470901 3076 log.go:172] (0xc000116840) Data frame received for 3\nI0516 14:20:02.471027 3076 log.go:172] (0xc0006c6000) (3) Data frame handling\nI0516 14:20:02.471130 3076 log.go:172] (0xc0006c6000) (3) Data frame sent\nI0516 14:20:02.471415 3076 log.go:172] (0xc000116840) Data frame received for 3\nI0516 14:20:02.471434 3076 log.go:172] (0xc0006c6000) (3) Data frame handling\nI0516 14:20:02.471492 3076 log.go:172] (0xc000116840) Data frame received for 5\nI0516 14:20:02.471509 3076 log.go:172] (0xc0004f6780) (5) Data frame handling\nI0516 14:20:02.473884 3076 log.go:172] (0xc000116840) Data frame received for 1\nI0516 14:20:02.473920 3076 log.go:172] (0xc0004f66e0) (1) Data frame handling\nI0516 14:20:02.473946 3076 log.go:172] (0xc0004f66e0) (1) Data frame sent\nI0516 14:20:02.473982 3076 log.go:172] (0xc000116840) (0xc0004f66e0) Stream removed, broadcasting: 1\nI0516 14:20:02.474007 3076 log.go:172] (0xc000116840) Go away received\nI0516 14:20:02.474457 3076 log.go:172] (0xc000116840) (0xc0004f66e0) Stream 
removed, broadcasting: 1\nI0516 14:20:02.474477 3076 log.go:172] (0xc000116840) (0xc0006c6000) Stream removed, broadcasting: 3\nI0516 14:20:02.474486 3076 log.go:172] (0xc000116840) (0xc0004f6780) Stream removed, broadcasting: 5\n" May 16 14:20:02.479: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 16 14:20:02.479: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 16 14:20:02.479: INFO: Waiting for statefulset status.replicas updated to 0 May 16 14:20:02.483: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 16 14:20:12.492: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 14:20:12.492: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 14:20:12.492: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 16 14:20:12.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999313s May 16 14:20:13.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979632657s May 16 14:20:14.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974060745s May 16 14:20:15.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968690046s May 16 14:20:16.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958627467s May 16 14:20:17.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.953276435s May 16 14:20:18.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947604836s May 16 14:20:19.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.941638397s May 16 14:20:20.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.936094038s May 16 14:20:21.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.467107ms STEP: Scaling down 
stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3049 May 16 14:20:22.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 14:20:22.828: INFO: stderr: "I0516 14:20:22.720996 3095 log.go:172] (0xc000a742c0) (0xc000a726e0) Create stream\nI0516 14:20:22.721046 3095 log.go:172] (0xc000a742c0) (0xc000a726e0) Stream added, broadcasting: 1\nI0516 14:20:22.723169 3095 log.go:172] (0xc000a742c0) Reply frame received for 1\nI0516 14:20:22.723222 3095 log.go:172] (0xc000a742c0) (0xc0006821e0) Create stream\nI0516 14:20:22.723245 3095 log.go:172] (0xc000a742c0) (0xc0006821e0) Stream added, broadcasting: 3\nI0516 14:20:22.724034 3095 log.go:172] (0xc000a742c0) Reply frame received for 3\nI0516 14:20:22.724061 3095 log.go:172] (0xc000a742c0) (0xc000682280) Create stream\nI0516 14:20:22.724071 3095 log.go:172] (0xc000a742c0) (0xc000682280) Stream added, broadcasting: 5\nI0516 14:20:22.724848 3095 log.go:172] (0xc000a742c0) Reply frame received for 5\nI0516 14:20:22.822156 3095 log.go:172] (0xc000a742c0) Data frame received for 3\nI0516 14:20:22.822190 3095 log.go:172] (0xc0006821e0) (3) Data frame handling\nI0516 14:20:22.822209 3095 log.go:172] (0xc0006821e0) (3) Data frame sent\nI0516 14:20:22.822227 3095 log.go:172] (0xc000a742c0) Data frame received for 3\nI0516 14:20:22.822237 3095 log.go:172] (0xc0006821e0) (3) Data frame handling\nI0516 14:20:22.822271 3095 log.go:172] (0xc000a742c0) Data frame received for 5\nI0516 14:20:22.822287 3095 log.go:172] (0xc000682280) (5) Data frame handling\nI0516 14:20:22.822301 3095 log.go:172] (0xc000682280) (5) Data frame sent\nI0516 14:20:22.822309 3095 log.go:172] (0xc000a742c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 14:20:22.822315 3095 log.go:172] (0xc000682280) (5) Data frame handling\nI0516 14:20:22.824124 3095 
log.go:172] (0xc000a742c0) Data frame received for 1\nI0516 14:20:22.824147 3095 log.go:172] (0xc000a726e0) (1) Data frame handling\nI0516 14:20:22.824163 3095 log.go:172] (0xc000a726e0) (1) Data frame sent\nI0516 14:20:22.824188 3095 log.go:172] (0xc000a742c0) (0xc000a726e0) Stream removed, broadcasting: 1\nI0516 14:20:22.824240 3095 log.go:172] (0xc000a742c0) Go away received\nI0516 14:20:22.824494 3095 log.go:172] (0xc000a742c0) (0xc000a726e0) Stream removed, broadcasting: 1\nI0516 14:20:22.824516 3095 log.go:172] (0xc000a742c0) (0xc0006821e0) Stream removed, broadcasting: 3\nI0516 14:20:22.824526 3095 log.go:172] (0xc000a742c0) (0xc000682280) Stream removed, broadcasting: 5\n" May 16 14:20:22.828: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:20:22.828: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:20:22.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 14:20:23.029: INFO: stderr: "I0516 14:20:22.953561 3116 log.go:172] (0xc000a2e420) (0xc00082c640) Create stream\nI0516 14:20:22.953611 3116 log.go:172] (0xc000a2e420) (0xc00082c640) Stream added, broadcasting: 1\nI0516 14:20:22.956345 3116 log.go:172] (0xc000a2e420) Reply frame received for 1\nI0516 14:20:22.956396 3116 log.go:172] (0xc000a2e420) (0xc000a06000) Create stream\nI0516 14:20:22.956410 3116 log.go:172] (0xc000a2e420) (0xc000a06000) Stream added, broadcasting: 3\nI0516 14:20:22.957750 3116 log.go:172] (0xc000a2e420) Reply frame received for 3\nI0516 14:20:22.957798 3116 log.go:172] (0xc000a2e420) (0xc00082c6e0) Create stream\nI0516 14:20:22.957811 3116 log.go:172] (0xc000a2e420) (0xc00082c6e0) Stream added, broadcasting: 5\nI0516 14:20:22.958871 3116 log.go:172] (0xc000a2e420) Reply frame received for 5\nI0516 14:20:23.023358 
3116 log.go:172] (0xc000a2e420) Data frame received for 3\nI0516 14:20:23.023389 3116 log.go:172] (0xc000a06000) (3) Data frame handling\nI0516 14:20:23.023422 3116 log.go:172] (0xc000a2e420) Data frame received for 5\nI0516 14:20:23.023448 3116 log.go:172] (0xc00082c6e0) (5) Data frame handling\nI0516 14:20:23.023455 3116 log.go:172] (0xc00082c6e0) (5) Data frame sent\nI0516 14:20:23.023461 3116 log.go:172] (0xc000a2e420) Data frame received for 5\nI0516 14:20:23.023465 3116 log.go:172] (0xc00082c6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 14:20:23.023488 3116 log.go:172] (0xc000a06000) (3) Data frame sent\nI0516 14:20:23.023497 3116 log.go:172] (0xc000a2e420) Data frame received for 3\nI0516 14:20:23.023503 3116 log.go:172] (0xc000a06000) (3) Data frame handling\nI0516 14:20:23.024788 3116 log.go:172] (0xc000a2e420) Data frame received for 1\nI0516 14:20:23.024804 3116 log.go:172] (0xc00082c640) (1) Data frame handling\nI0516 14:20:23.024814 3116 log.go:172] (0xc00082c640) (1) Data frame sent\nI0516 14:20:23.024823 3116 log.go:172] (0xc000a2e420) (0xc00082c640) Stream removed, broadcasting: 1\nI0516 14:20:23.024833 3116 log.go:172] (0xc000a2e420) Go away received\nI0516 14:20:23.025095 3116 log.go:172] (0xc000a2e420) (0xc00082c640) Stream removed, broadcasting: 1\nI0516 14:20:23.025294 3116 log.go:172] (0xc000a2e420) (0xc000a06000) Stream removed, broadcasting: 3\nI0516 14:20:23.025311 3116 log.go:172] (0xc000a2e420) (0xc00082c6e0) Stream removed, broadcasting: 5\n" May 16 14:20:23.029: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:20:23.029: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:20:23.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3049 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 16 
14:20:23.236: INFO: stderr: "I0516 14:20:23.157590 3136 log.go:172] (0xc00095c370) (0xc0001d6820) Create stream\nI0516 14:20:23.157668 3136 log.go:172] (0xc00095c370) (0xc0001d6820) Stream added, broadcasting: 1\nI0516 14:20:23.159964 3136 log.go:172] (0xc00095c370) Reply frame received for 1\nI0516 14:20:23.160018 3136 log.go:172] (0xc00095c370) (0xc0001d68c0) Create stream\nI0516 14:20:23.160030 3136 log.go:172] (0xc00095c370) (0xc0001d68c0) Stream added, broadcasting: 3\nI0516 14:20:23.161102 3136 log.go:172] (0xc00095c370) Reply frame received for 3\nI0516 14:20:23.161284 3136 log.go:172] (0xc00095c370) (0xc0008b8000) Create stream\nI0516 14:20:23.161303 3136 log.go:172] (0xc00095c370) (0xc0008b8000) Stream added, broadcasting: 5\nI0516 14:20:23.162437 3136 log.go:172] (0xc00095c370) Reply frame received for 5\nI0516 14:20:23.231571 3136 log.go:172] (0xc00095c370) Data frame received for 5\nI0516 14:20:23.231606 3136 log.go:172] (0xc0008b8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0516 14:20:23.231631 3136 log.go:172] (0xc00095c370) Data frame received for 3\nI0516 14:20:23.231677 3136 log.go:172] (0xc0001d68c0) (3) Data frame handling\nI0516 14:20:23.231696 3136 log.go:172] (0xc0001d68c0) (3) Data frame sent\nI0516 14:20:23.231706 3136 log.go:172] (0xc00095c370) Data frame received for 3\nI0516 14:20:23.231713 3136 log.go:172] (0xc0001d68c0) (3) Data frame handling\nI0516 14:20:23.231753 3136 log.go:172] (0xc0008b8000) (5) Data frame sent\nI0516 14:20:23.231773 3136 log.go:172] (0xc00095c370) Data frame received for 5\nI0516 14:20:23.231784 3136 log.go:172] (0xc0008b8000) (5) Data frame handling\nI0516 14:20:23.232822 3136 log.go:172] (0xc00095c370) Data frame received for 1\nI0516 14:20:23.232835 3136 log.go:172] (0xc0001d6820) (1) Data frame handling\nI0516 14:20:23.232841 3136 log.go:172] (0xc0001d6820) (1) Data frame sent\nI0516 14:20:23.232851 3136 log.go:172] (0xc00095c370) (0xc0001d6820) Stream removed, broadcasting: 
1\nI0516 14:20:23.232895 3136 log.go:172] (0xc00095c370) Go away received\nI0516 14:20:23.233103 3136 log.go:172] (0xc00095c370) (0xc0001d6820) Stream removed, broadcasting: 1\nI0516 14:20:23.233250 3136 log.go:172] (0xc00095c370) (0xc0001d68c0) Stream removed, broadcasting: 3\nI0516 14:20:23.233269 3136 log.go:172] (0xc00095c370) (0xc0008b8000) Stream removed, broadcasting: 5\n" May 16 14:20:23.236: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 16 14:20:23.236: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 16 14:20:23.236: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 16 14:20:43.252: INFO: Deleting all statefulset in ns statefulset-3049 May 16 14:20:43.256: INFO: Scaling statefulset ss to 0 May 16 14:20:43.264: INFO: Waiting for statefulset status.replicas updated to 0 May 16 14:20:43.266: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:20:43.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3049" for this suite. 
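The test above ("Scaling should happen in predictable order and halt if any stateful pod is unhealthy") scales the StatefulSet down from 3 replicas to 0 and verifies removal happens in reverse ordinal order: ss-2 before ss-1 before ss-0. A minimal sketch of that ordering rule (a hypothetical helper, not the e2e framework's own code):

```python
def scale_down_order(pods, target_replicas):
    """Return the order in which StatefulSet pods are removed when scaling
    down: highest ordinal first (ss-2, then ss-1, then ss-0)."""
    # Sort by the ordinal suffix, then drop everything above the target
    # replica count; removal proceeds from the highest ordinal downward.
    by_ordinal = sorted(pods, key=lambda name: int(name.rsplit("-", 1)[1]))
    return list(reversed(by_ordinal[target_replicas:]))

# Scaling ss from 3 replicas to 0 removes pods in reverse ordinal order.
print(scale_down_order(["ss-0", "ss-1", "ss-2"], 0))  # ['ss-2', 'ss-1', 'ss-0']
```

The "halt if unhealthy" half of the test is why the log shows ten "Verifying statefulset ss doesn't scale past 3" probes before the scale-down: the controller must not proceed while a pod's readiness is broken.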
May 16 14:20:49.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:20:49.383: INFO: namespace statefulset-3049 deletion completed in 6.103052838s • [SLOW TEST:88.449 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:20:49.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 16 14:20:49.469: INFO: Waiting up to 5m0s for pod "downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d" in namespace "downward-api-6518" to be "success or failure" May 16 14:20:49.473: INFO: Pod "downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.860482ms May 16 14:20:51.477: INFO: Pod "downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007753161s May 16 14:20:53.481: INFO: Pod "downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012076892s STEP: Saw pod success May 16 14:20:53.481: INFO: Pod "downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d" satisfied condition "success or failure" May 16 14:20:53.484: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d container dapi-container: STEP: delete the pod May 16 14:20:53.524: INFO: Waiting for pod downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d to disappear May 16 14:20:53.537: INFO: Pod downward-api-ff89752a-9246-4b1c-9b9c-12a98aa36d6d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:20:53.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6518" for this suite. 
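The Downward API test above injects the pod's own UID into the container environment via a fieldRef. A hedged sketch of the manifest shape such a test builds (the pod and env-var names here are illustrative, not the test's actual values; the field names follow the Kubernetes core/v1 API):

```python
# Build a pod manifest exposing metadata.uid as an env var, mirroring the
# downward-api e2e test's setup (names below are made up for illustration).
def downward_api_pod(name="dapi-demo", image="busybox"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": image,
                "command": ["sh", "-c", "echo $POD_UID"],
                "env": [{
                    "name": "POD_UID",
                    "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
                }],
            }],
        },
    }

env = downward_api_pod()["spec"]["containers"][0]["env"][0]
print(env["valueFrom"]["fieldRef"]["fieldPath"])  # metadata.uid
```

The test then reads the container's log and checks that the echoed value matches the UID the API server assigned to the pod.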
May 16 14:20:59.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:20:59.639: INFO: namespace downward-api-6518 deletion completed in 6.098237618s • [SLOW TEST:10.255 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:20:59.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:21:03.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1269" for this suite. 
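The Kubelet test above runs a container whose command always fails and then asserts the kubelet records a terminated state with a reason. A sketch of pulling that reason out of a pod status object (the status dict below is a hand-written stand-in shaped like core/v1 PodStatus, not real cluster output):

```python
def terminated_reason(pod_status):
    """Return the terminated reason of the first container, or None if the
    container has not terminated (shape follows core/v1 PodStatus)."""
    statuses = pod_status.get("containerStatuses", [])
    if not statuses:
        return None
    terminated = statuses[0].get("state", {}).get("terminated")
    return terminated.get("reason") if terminated else None

# Stand-in status for a pod whose command exited non-zero.
status = {"phase": "Failed",
          "containerStatuses": [{"name": "bin-false",
                                 "state": {"terminated": {"exitCode": 1,
                                                          "reason": "Error"}}}]}
print(terminated_reason(status))  # Error
```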
May 16 14:21:09.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:21:09.856: INFO: namespace kubelet-test-1269 deletion completed in 6.092275211s • [SLOW TEST:10.217 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:21:09.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:21:09.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3787" for this suite. 
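The "Pods Set QOS Class" test above verifies that a QoS class appears in the pod's status. That class is derived from the containers' resource requests and limits; a simplified sketch of the rule (ignoring init containers and extended resources):

```python
def qos_class(containers):
    """Classify a pod as Guaranteed, Burstable, or BestEffort from its
    containers' cpu/memory requests and limits (a simplified version of the
    kubelet's rule: no init containers, no extended resources)."""
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"
    # Guaranteed: every container sets both cpu and memory, requests == limits.
    if all(c.get("requests") and c.get("requests") == c.get("limits")
           and set(c["requests"]) == {"cpu", "memory"} for c in containers):
        return "Guaranteed"
    return "Burstable"

print(qos_class([{"requests": {"cpu": "100m", "memory": "128Mi"},
                  "limits":   {"cpu": "100m", "memory": "128Mi"}}]))  # Guaranteed
print(qos_class([{}]))                                                # BestEffort
print(qos_class([{"requests": {"cpu": "100m"}}]))                     # Burstable
```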
May 16 14:21:32.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:21:32.087: INFO: namespace pods-3787 deletion completed in 22.088154832s • [SLOW TEST:22.231 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:21:32.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:21:32.145: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 16 14:21:32.162: INFO: Pod name sample-pod: Found 0 pods out of 1 May 16 14:21:37.172: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 14:21:37.172: INFO: Creating deployment "test-rolling-update-deployment" 
May 16 14:21:37.176: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 16 14:21:37.183: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 16 14:21:39.190: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 16 14:21:39.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235697, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235697, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235697, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:21:41.196: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 16 14:21:41.205: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2767,SelfLink:/apis/apps/v1/namespaces/deployment-2767/deployments/test-rolling-update-deployment,UID:30007766-6a59-4947-b129-5de28e74a850,ResourceVersion:11229684,Generation:1,CreationTimestamp:2020-05-16 14:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-16 14:21:37 +0000 UTC 2020-05-16 14:21:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-16 14:21:40 +0000 UTC 2020-05-16 14:21:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 16 14:21:41.208: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2767,SelfLink:/apis/apps/v1/namespaces/deployment-2767/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:0aad29de-a831-4adf-97ef-f1bd4c83d8ea,ResourceVersion:11229672,Generation:1,CreationTimestamp:2020-05-16 14:21:37 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 30007766-6a59-4947-b129-5de28e74a850 0xc00239c907 0xc00239c908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 16 14:21:41.208: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 16 14:21:41.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2767,SelfLink:/apis/apps/v1/namespaces/deployment-2767/replicasets/test-rolling-update-controller,UID:62d17dd1-6f3b-498a-80bb-71b5147cdfe3,ResourceVersion:11229682,Generation:2,CreationTimestamp:2020-05-16 14:21:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 30007766-6a59-4947-b129-5de28e74a850 0xc00239c837 0xc00239c838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 14:21:41.211: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-gpvqc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-gpvqc,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2767,SelfLink:/api/v1/namespaces/deployment-2767/pods/test-rolling-update-deployment-79f6b9d75c-gpvqc,UID:be2b8f96-9284-45cf-9bce-9406a3e7c8b6,ResourceVersion:11229671,Generation:0,CreationTimestamp:2020-05-16 14:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 0aad29de-a831-4adf-97ef-f1bd4c83d8ea 0xc002ee8e57 0xc002ee8e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xvhq7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xvhq7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-xvhq7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ee8ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ee8ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:21:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:21:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:21:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:21:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.88,StartTime:2020-05-16 14:21:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-16 14:21:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://372a593006ddf6806f02de74390a4da7f086c2b622e02e4b2a546f97677efbc9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:21:41.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-2767" for this suite. May 16 14:21:47.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:21:47.443: INFO: namespace deployment-2767 deletion completed in 6.227390761s • [SLOW TEST:15.355 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:21:47.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:21:47.560: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 4.190315ms) May 16 14:21:47.563: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.620546ms) May 16 14:21:47.566: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.342055ms) May 16 14:21:47.569: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.915852ms) May 16 14:21:47.572: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.790677ms) May 16 14:21:47.575: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.737087ms) May 16 14:21:47.578: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.924223ms) May 16 14:21:47.580: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.713703ms) May 16 14:21:47.583: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.402723ms) May 16 14:21:47.585: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.516769ms) May 16 14:21:47.588: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.806726ms) May 16 14:21:47.591: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.925773ms) May 16 14:21:47.594: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.990468ms) May 16 14:21:47.597: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.942047ms) May 16 14:21:47.600: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.918225ms) May 16 14:21:47.603: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.302887ms) May 16 14:21:47.607: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.393989ms) May 16 14:21:47.610: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.415826ms) May 16 14:21:47.614: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.260632ms) May 16 14:21:47.617: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.394037ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:21:47.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8033" for this suite. May 16 14:21:53.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:21:53.740: INFO: namespace proxy-8033 deletion completed in 6.11942786s • [SLOW TEST:6.297 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:21:53.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-ac2c649b-242e-41a3-9b7d-4f66a74a2a5a STEP: Creating a pod to test consume configMaps May 16 14:21:53.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea" in 
namespace "configmap-7836" to be "success or failure" May 16 14:21:53.819: INFO: Pod "pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 19.476345ms May 16 14:21:55.948: INFO: Pod "pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148626583s May 16 14:21:57.953: INFO: Pod "pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153360361s STEP: Saw pod success May 16 14:21:57.953: INFO: Pod "pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea" satisfied condition "success or failure" May 16 14:21:57.955: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea container configmap-volume-test: STEP: delete the pod May 16 14:21:58.099: INFO: Waiting for pod pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea to disappear May 16 14:21:58.157: INFO: Pod pod-configmaps-cfd3af9c-d45e-406c-8866-273381a7e9ea no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:21:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7836" for this suite. 
May 16 14:22:04.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:22:04.320: INFO: namespace configmap-7836 deletion completed in 6.159142834s • [SLOW TEST:10.579 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:22:04.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 16 14:22:04.350: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:22:04.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8284" for this suite. 
May 16 14:22:10.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:22:10.548: INFO: namespace kubectl-8284 deletion completed in 6.104369697s • [SLOW TEST:6.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:22:10.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 14:22:10.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7" in namespace "projected-6478" to be "success or failure" May 16 14:22:10.631: INFO: Pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.772443ms May 16 14:22:12.635: INFO: Pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009101608s May 16 14:22:14.639: INFO: Pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.013249713s May 16 14:22:16.643: INFO: Pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017624982s STEP: Saw pod success May 16 14:22:16.644: INFO: Pod "downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7" satisfied condition "success or failure" May 16 14:22:16.647: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7 container client-container: STEP: delete the pod May 16 14:22:16.707: INFO: Waiting for pod downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7 to disappear May 16 14:22:16.712: INFO: Pod downwardapi-volume-cfa46808-52fa-41c0-af56-c473d9d226a7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:22:16.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6478" for this suite. 
May 16 14:22:22.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:22:22.803: INFO: namespace projected-6478 deletion completed in 6.087801882s • [SLOW TEST:12.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:22:22.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 16 14:22:27.404: INFO: Successfully updated pod "pod-update-activedeadlineseconds-80366ef2-5ff0-4b32-aba3-517020be56f0" May 16 14:22:27.404: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-80366ef2-5ff0-4b32-aba3-517020be56f0" in namespace "pods-8370" to be "terminated due to deadline exceeded" May 16 
14:22:27.420: INFO: Pod "pod-update-activedeadlineseconds-80366ef2-5ff0-4b32-aba3-517020be56f0": Phase="Running", Reason="", readiness=true. Elapsed: 16.28834ms May 16 14:22:29.425: INFO: Pod "pod-update-activedeadlineseconds-80366ef2-5ff0-4b32-aba3-517020be56f0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021233476s May 16 14:22:29.425: INFO: Pod "pod-update-activedeadlineseconds-80366ef2-5ff0-4b32-aba3-517020be56f0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:22:29.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8370" for this suite. May 16 14:22:35.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:22:35.566: INFO: namespace pods-8370 deletion completed in 6.136969252s • [SLOW TEST:12.761 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:22:35.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 14:22:35.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524" in namespace "downward-api-708" to be "success or failure" May 16 14:22:35.679: INFO: Pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524": Phase="Pending", Reason="", readiness=false. Elapsed: 21.702015ms May 16 14:22:37.683: INFO: Pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026069753s May 16 14:22:39.687: INFO: Pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524": Phase="Running", Reason="", readiness=true. Elapsed: 4.029811792s May 16 14:22:41.691: INFO: Pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034171672s STEP: Saw pod success May 16 14:22:41.691: INFO: Pod "downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524" satisfied condition "success or failure" May 16 14:22:41.695: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524 container client-container: STEP: delete the pod May 16 14:22:41.765: INFO: Waiting for pod downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524 to disappear May 16 14:22:41.770: INFO: Pod downwardapi-volume-8616cfaa-f88b-4b30-a8da-507fe263c524 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:22:41.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-708" for this suite. 
May 16 14:22:47.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:22:47.856: INFO: namespace downward-api-708 deletion completed in 6.083412726s • [SLOW TEST:12.290 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:22:47.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a63f10fe-7071-47f3-aaf5-39a3a3c719fb in namespace container-probe-6776 May 16 14:22:52.013: INFO: Started pod liveness-a63f10fe-7071-47f3-aaf5-39a3a3c719fb in namespace container-probe-6776 STEP: checking the pod's current state and verifying that restartCount is present May 16 14:22:52.021: INFO: Initial restart count of pod liveness-a63f10fe-7071-47f3-aaf5-39a3a3c719fb is 0 May 16 14:23:10.300: 
INFO: Restart count of pod container-probe-6776/liveness-a63f10fe-7071-47f3-aaf5-39a3a3c719fb is now 1 (18.279680451s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:23:10.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6776" for this suite. May 16 14:23:16.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:23:16.435: INFO: namespace container-probe-6776 deletion completed in 6.109603815s • [SLOW TEST:28.578 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:23:16.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-tl8b 
STEP: Creating a pod to test atomic-volume-subpath May 16 14:23:16.541: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tl8b" in namespace "subpath-9289" to be "success or failure" May 16 14:23:16.546: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.107195ms May 16 14:23:18.550: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009223435s May 16 14:23:20.554: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 4.013508898s May 16 14:23:22.559: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 6.018264671s May 16 14:23:24.563: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 8.02261697s May 16 14:23:26.596: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 10.05534468s May 16 14:23:28.601: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 12.059838043s May 16 14:23:30.605: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 14.063840863s May 16 14:23:32.609: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 16.068530426s May 16 14:23:34.614: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 18.073020135s May 16 14:23:36.618: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 20.077609065s May 16 14:23:38.623: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. Elapsed: 22.082428221s May 16 14:23:40.628: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.086842103s May 16 14:23:42.632: INFO: Pod "pod-subpath-test-downwardapi-tl8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.091793612s STEP: Saw pod success May 16 14:23:42.633: INFO: Pod "pod-subpath-test-downwardapi-tl8b" satisfied condition "success or failure" May 16 14:23:42.636: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-tl8b container test-container-subpath-downwardapi-tl8b: STEP: delete the pod May 16 14:23:42.658: INFO: Waiting for pod pod-subpath-test-downwardapi-tl8b to disappear May 16 14:23:42.668: INFO: Pod pod-subpath-test-downwardapi-tl8b no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-tl8b May 16 14:23:42.668: INFO: Deleting pod "pod-subpath-test-downwardapi-tl8b" in namespace "subpath-9289" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:23:42.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9289" for this suite. 
May 16 14:23:48.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:23:48.776: INFO: namespace subpath-9289 deletion completed in 6.101925528s • [SLOW TEST:32.341 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:23:48.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 16 14:23:48.834: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix280331414/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:23:48.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-2354" for this suite. May 16 14:23:54.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:23:54.982: INFO: namespace kubectl-2354 deletion completed in 6.076732382s • [SLOW TEST:6.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:23:54.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:23:55.050: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 16 14:23:57.203: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no 
failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:23:58.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1327" for this suite. May 16 14:24:04.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:04.615: INFO: namespace replication-controller-1327 deletion completed in 6.377725659s • [SLOW TEST:9.633 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:04.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:24:04.669: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:24:05.805: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8529" for this suite. May 16 14:24:11.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:11.897: INFO: namespace custom-resource-definition-8529 deletion completed in 6.088361021s • [SLOW TEST:7.282 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:11.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 16 14:24:11.944: INFO: PodSpec: 
initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:24:18.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8517" for this suite. May 16 14:24:24.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:24.129: INFO: namespace init-container-8517 deletion completed in 6.09267987s • [SLOW TEST:12.232 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:24.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: 
expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0516 14:24:27.390468 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 16 14:24:27.390: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:24:27.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8315" for this suite. 
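The repeated "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" lines above are the garbage-collector test polling until the deployment's ReplicaSet and pods are actually reaped. A sketch of that retry loop, with hypothetical `count_rs`/`count_pods` callbacks standing in for apiserver list calls:

```python
import time

def wait_for_garbage_collection(count_rs, count_pods, timeout=30.0, interval=1.0):
    """Poll until both ReplicaSet and pod counts reach zero, logging
    mismatches in the same style as the e2e test output."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        rs, pods = count_rs(), count_pods()
        if rs == 0 and pods == 0:
            return True  # everything garbage-collected
        print(f"expected 0 rs, got {rs} rs")
        print(f"expected 0 pods, got {pods} pods")
        time.sleep(interval)
    return False  # GC did not finish in time
```

Deletion is asynchronous in Kubernetes, which is why several polls see leftover objects before the final success.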
May 16 14:24:33.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:33.567: INFO: namespace gc-8315 deletion completed in 6.174670995s • [SLOW TEST:9.438 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:33.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6202.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6202.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6202.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6202.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6202.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6202.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 16 14:24:39.711: INFO: DNS probes using dns-6202/dns-test-9e0453b2-5cf4-4003-91de-ff5915778e84 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:24:39.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6202" for this suite. 
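The awk pipeline in the probe commands above builds the pod's DNS A-record name by replacing the dots in its IPv4 address with dashes (so 10.244.2.96 in namespace dns-6202 becomes 10-244-2-96.dns-6202.pod.cluster.local). The same transformation in Python:

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Build the DNS A-record name for a pod: <dashed-ip>.<namespace>.pod.<domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"
```

This is the name the wheezy and jessie probe containers resolve with `dig` over both UDP (`+notcp`) and TCP (`+tcp`) before writing their OK markers.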
May 16 14:24:45.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:45.861: INFO: namespace dns-6202 deletion completed in 6.099326627s • [SLOW TEST:12.294 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:45.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-04a88a67-ebe8-4ed7-85bc-7b0e5f6d6b3e STEP: Creating a pod to test consume configMaps May 16 14:24:45.998: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95" in namespace "projected-3379" to be "success or failure" May 16 14:24:46.001: INFO: Pod "pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.67817ms May 16 14:24:48.005: INFO: Pod "pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006866883s May 16 14:24:50.010: INFO: Pod "pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011448265s STEP: Saw pod success May 16 14:24:50.010: INFO: Pod "pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95" satisfied condition "success or failure" May 16 14:24:50.013: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95 container projected-configmap-volume-test: STEP: delete the pod May 16 14:24:50.047: INFO: Waiting for pod pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95 to disappear May 16 14:24:50.054: INFO: Pod pod-projected-configmaps-7994a975-e0a5-43ba-aa9f-c3861d332a95 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:24:50.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3379" for this suite. 
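The "success or failure" condition polled above is simply the pod reaching a terminal phase. A hedged sketch of that wait, with elapsed-time reporting like the framework's; the `get_phase` callback is a stand-in for a real pod status read:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=0.5):
    """Poll get_phase() until the pod is Succeeded or Failed.

    Returns (phase, elapsed_seconds), printing progress in roughly the
    same format as the e2e log lines.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", elapsed: {elapsed:.3f}s')
        if phase in TERMINAL_PHASES:
            return phase, elapsed
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

In the log above the pod stays Pending for two polls while the image pulls and the container runs, then lands on Succeeded about four seconds in.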
May 16 14:24:56.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:24:56.148: INFO: namespace projected-3379 deletion completed in 6.091226243s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:24:56.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 16 14:25:00.239: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f7510675-ea3f-459b-92f0-c671d4624632,GenerateName:,Namespace:events-255,SelfLink:/api/v1/namespaces/events-255/pods/send-events-f7510675-ea3f-459b-92f0-c671d4624632,UID:b8c292a2-683e-4a52-a13d-ce30d675cb1e,ResourceVersion:11230518,Generation:0,CreationTimestamp:2020-05-16 14:24:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 220036890,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bwjw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bwjw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bwjw4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033fb020} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033fb040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:24:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:24:56 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.96,StartTime:2020-05-16 14:24:56 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-16 14:24:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://14a6cd1ddaac795a2fd97945b7e4ddc7a2acacb8d08cb895e12533a06335e01f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 16 14:25:02.244: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 16 14:25:04.249: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:25:04.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-255" for this suite. 
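The two checks at the end ("Saw scheduler event" / "Saw kubelet event") select events for the pod by their source component. A minimal sketch over plain dicts; the field names mirror the core/v1 Event's `involvedObject` and `source` fields, but this is illustrative, not the client-go API:

```python
def events_from_component(events, pod_name, component):
    """Return events whose involved object is the pod and whose
    reporting source matches the given component name."""
    return [
        e for e in events
        if e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == component
    ]
```

The test passes once it sees at least one event from `default-scheduler` (the Scheduled event) and one from `kubelet` (e.g. Pulled/Created/Started) for the pod it submitted.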
May 16 14:25:42.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:25:42.370: INFO: namespace events-255 deletion completed in 38.10829334s • [SLOW TEST:46.221 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:25:42.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:25:42.468: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 16 14:25:47.472: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 14:25:47.472: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 16 14:25:49.476: INFO: Creating deployment "test-rollover-deployment" May 16 14:25:49.498: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 16 14:25:51.505: INFO: Check revision of new replica set for deployment 
"test-rollover-deployment" May 16 14:25:51.511: INFO: Ensure that both replica sets have 1 created replica May 16 14:25:51.516: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 16 14:25:51.522: INFO: Updating deployment test-rollover-deployment May 16 14:25:51.522: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 16 14:25:53.536: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 16 14:25:53.544: INFO: Make sure deployment "test-rollover-deployment" is complete May 16 14:25:53.550: INFO: all replica sets need to contain the pod-template-hash label May 16 14:25:53.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235951, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:25:55.561: INFO: all replica sets need to contain the pod-template-hash label May 16 14:25:55.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235955, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:25:57.556: INFO: all replica sets need to contain the pod-template-hash label May 16 14:25:57.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235955, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:25:59.559: INFO: all replica sets need to contain the pod-template-hash label May 16 14:25:59.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235955, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:26:01.558: INFO: all replica sets need to contain the pod-template-hash label May 16 14:26:01.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235955, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:26:03.558: INFO: all replica sets need to contain the pod-template-hash label May 16 14:26:03.558: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235955, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725235949, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 14:26:05.897: INFO: May 16 14:26:05.897: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 16 14:26:05.906: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6473,SelfLink:/apis/apps/v1/namespaces/deployment-6473/deployments/test-rollover-deployment,UID:06f6901f-81f8-4c79-9149-d428d42bbdc2,ResourceVersion:11230735,Generation:2,CreationTimestamp:2020-05-16 14:25:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-16 14:25:49 +0000 UTC 2020-05-16 
14:25:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-16 14:26:05 +0000 UTC 2020-05-16 14:25:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 16 14:26:05.910: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6473,SelfLink:/apis/apps/v1/namespaces/deployment-6473/replicasets/test-rollover-deployment-854595fc44,UID:01f9f5bb-6a71-4e9b-a2aa-1c94a4f75a6b,ResourceVersion:11230724,Generation:2,CreationTimestamp:2020-05-16 14:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 06f6901f-81f8-4c79-9149-d428d42bbdc2 0xc0022f7cd7 0xc0022f7cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 16 14:26:05.910: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 16 14:26:05.910: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6473,SelfLink:/apis/apps/v1/namespaces/deployment-6473/replicasets/test-rollover-controller,UID:ea477a41-b486-4a2c-9661-c13eff216d08,ResourceVersion:11230733,Generation:2,CreationTimestamp:2020-05-16 14:25:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
06f6901f-81f8-4c79-9149-d428d42bbdc2 0xc0022f7c07 0xc0022f7c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 14:26:05.910: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6473,SelfLink:/apis/apps/v1/namespaces/deployment-6473/replicasets/test-rollover-deployment-9b8b997cf,UID:2889004b-9c96-43af-a084-318f2bae1649,ResourceVersion:11230688,Generation:2,CreationTimestamp:2020-05-16 14:25:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 06f6901f-81f8-4c79-9149-d428d42bbdc2 0xc0022f7da0 0xc0022f7da1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 16 14:26:05.913: INFO: Pod "test-rollover-deployment-854595fc44-xjnzf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-xjnzf,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6473,SelfLink:/api/v1/namespaces/deployment-6473/pods/test-rollover-deployment-854595fc44-xjnzf,UID:622077e8-3389-429f-9d41-0f94ab42d5c0,ResourceVersion:11230702,Generation:0,CreationTimestamp:2020-05-16 14:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 01f9f5bb-6a71-4e9b-a2aa-1c94a4f75a6b 0xc001621157 0xc001621158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnbcp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnbcp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-vnbcp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016211d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016211f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 14:25:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.254,StartTime:2020-05-16 14:25:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-16 14:25:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1cbf776ff9aa22dedf5217ff7c4d9f10e5d82248942666d21af0013e97d87e8c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:26:05.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6473" for this suite. May 16 14:26:11.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:26:12.237: INFO: namespace deployment-6473 deletion completed in 6.320364703s • [SLOW TEST:29.867 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:26:12.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon 
pods launch on every node of the cluster. May 16 14:26:12.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:12.368: INFO: Number of nodes with available pods: 0 May 16 14:26:12.368: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:13.375: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:13.378: INFO: Number of nodes with available pods: 0 May 16 14:26:13.378: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:14.486: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:14.489: INFO: Number of nodes with available pods: 0 May 16 14:26:14.489: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:15.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:15.443: INFO: Number of nodes with available pods: 1 May 16 14:26:15.443: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:16.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:16.376: INFO: Number of nodes with available pods: 2 May 16 14:26:16.376: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 16 14:26:16.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:16.402: INFO: Number of nodes with available pods: 1 May 16 14:26:16.402: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:17.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:17.409: INFO: Number of nodes with available pods: 1 May 16 14:26:17.409: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:18.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:18.410: INFO: Number of nodes with available pods: 1 May 16 14:26:18.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:19.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:19.409: INFO: Number of nodes with available pods: 1 May 16 14:26:19.409: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:20.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:20.409: INFO: Number of nodes with available pods: 1 May 16 14:26:20.409: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:21.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:21.409: INFO: Number of nodes with available pods: 1 May 16 14:26:21.409: INFO: Node 
iruya-worker is running more than one daemon pod May 16 14:26:22.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:22.410: INFO: Number of nodes with available pods: 1 May 16 14:26:22.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:23.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:23.409: INFO: Number of nodes with available pods: 1 May 16 14:26:23.409: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:24.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:24.410: INFO: Number of nodes with available pods: 1 May 16 14:26:24.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:25.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:25.410: INFO: Number of nodes with available pods: 1 May 16 14:26:25.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:26.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:26.425: INFO: Number of nodes with available pods: 1 May 16 14:26:26.425: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:27.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:27.410: INFO: Number of nodes with 
available pods: 1 May 16 14:26:27.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:28.412: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:28.415: INFO: Number of nodes with available pods: 1 May 16 14:26:28.415: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:29.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:29.411: INFO: Number of nodes with available pods: 1 May 16 14:26:29.411: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:30.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:30.410: INFO: Number of nodes with available pods: 1 May 16 14:26:30.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:31.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:31.410: INFO: Number of nodes with available pods: 1 May 16 14:26:31.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:32.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:32.410: INFO: Number of nodes with available pods: 1 May 16 14:26:32.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:33.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
May 16 14:26:33.409: INFO: Number of nodes with available pods: 1 May 16 14:26:33.409: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:34.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:34.409: INFO: Number of nodes with available pods: 1 May 16 14:26:34.410: INFO: Node iruya-worker is running more than one daemon pod May 16 14:26:35.405: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 16 14:26:35.408: INFO: Number of nodes with available pods: 2 May 16 14:26:35.408: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3350, will wait for the garbage collector to delete the pods May 16 14:26:35.468: INFO: Deleting DaemonSet.extensions daemon-set took: 5.30862ms May 16 14:26:35.768: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.246237ms May 16 14:26:40.372: INFO: Number of nodes with available pods: 0 May 16 14:26:40.372: INFO: Number of running nodes: 0, number of available pods: 0 May 16 14:26:40.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3350/daemonsets","resourceVersion":"11230897"},"items":null} May 16 14:26:40.382: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3350/pods","resourceVersion":"11230897"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 
14:26:40.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3350" for this suite. May 16 14:26:46.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:26:46.518: INFO: namespace daemonsets-3350 deletion completed in 6.12551718s • [SLOW TEST:34.280 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:26:46.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-2773/secret-test-fa1710e6-257d-4eb7-9f5a-ed1251ecbd77 STEP: Creating a pod to test consume secrets May 16 14:26:46.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5" in namespace "secrets-2773" to be "success or failure" May 16 14:26:46.615: INFO: Pod "pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.37751ms May 16 14:26:48.704: INFO: Pod "pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106838959s May 16 14:26:50.728: INFO: Pod "pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130860496s STEP: Saw pod success May 16 14:26:50.728: INFO: Pod "pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5" satisfied condition "success or failure" May 16 14:26:50.731: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5 container env-test: STEP: delete the pod May 16 14:26:50.808: INFO: Waiting for pod pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5 to disappear May 16 14:26:50.811: INFO: Pod pod-configmaps-6dad84c7-3d65-4fac-9b6e-c4b6230e4ac5 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:26:50.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2773" for this suite. 
May 16 14:26:56.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:26:56.907: INFO: namespace secrets-2773 deletion completed in 6.093757922s • [SLOW TEST:10.389 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:26:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 16 14:26:56.945: INFO: Waiting up to 5m0s for pod "pod-1c0bff3c-4d77-458b-b185-4229eb555ffc" in namespace "emptydir-8070" to be "success or failure" May 16 14:26:56.966: INFO: Pod "pod-1c0bff3c-4d77-458b-b185-4229eb555ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.07309ms May 16 14:26:58.971: INFO: Pod "pod-1c0bff3c-4d77-458b-b185-4229eb555ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025323765s May 16 14:27:00.975: INFO: Pod "pod-1c0bff3c-4d77-458b-b185-4229eb555ffc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029822358s STEP: Saw pod success May 16 14:27:00.975: INFO: Pod "pod-1c0bff3c-4d77-458b-b185-4229eb555ffc" satisfied condition "success or failure" May 16 14:27:00.978: INFO: Trying to get logs from node iruya-worker pod pod-1c0bff3c-4d77-458b-b185-4229eb555ffc container test-container: STEP: delete the pod May 16 14:27:01.020: INFO: Waiting for pod pod-1c0bff3c-4d77-458b-b185-4229eb555ffc to disappear May 16 14:27:01.050: INFO: Pod pod-1c0bff3c-4d77-458b-b185-4229eb555ffc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:27:01.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8070" for this suite. May 16 14:27:07.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:27:07.171: INFO: namespace emptydir-8070 deletion completed in 6.117189455s • [SLOW TEST:10.264 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:27:07.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 14:27:07.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499" in namespace "downward-api-2549" to be "success or failure" May 16 14:27:07.264: INFO: Pod "downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499": Phase="Pending", Reason="", readiness=false. Elapsed: 53.4563ms May 16 14:27:09.270: INFO: Pod "downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058732506s May 16 14:27:11.274: INFO: Pod "downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063212148s STEP: Saw pod success May 16 14:27:11.274: INFO: Pod "downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499" satisfied condition "success or failure" May 16 14:27:11.278: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499 container client-container: STEP: delete the pod May 16 14:27:11.398: INFO: Waiting for pod downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499 to disappear May 16 14:27:11.418: INFO: Pod downwardapi-volume-588a9d1a-3b35-46a2-bc75-334822844499 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:27:11.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2549" for this suite. 
May 16 14:27:17.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:27:17.509: INFO: namespace downward-api-2549 deletion completed in 6.08491217s
• [SLOW TEST:10.337 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:27:17.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 16 14:27:17.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6936'
May 16 14:27:20.837: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 16 14:27:20.837: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 16 14:27:20.848: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ch46r]
May 16 14:27:20.848: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ch46r" in namespace "kubectl-6936" to be "running and ready"
May 16 14:27:20.899: INFO: Pod "e2e-test-nginx-rc-ch46r": Phase="Pending", Reason="", readiness=false. Elapsed: 51.268457ms
May 16 14:27:22.904: INFO: Pod "e2e-test-nginx-rc-ch46r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055658997s
May 16 14:27:24.907: INFO: Pod "e2e-test-nginx-rc-ch46r": Phase="Running", Reason="", readiness=true. Elapsed: 4.059153581s
May 16 14:27:24.907: INFO: Pod "e2e-test-nginx-rc-ch46r" satisfied condition "running and ready"
May 16 14:27:24.907: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-ch46r]
May 16 14:27:24.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6936'
May 16 14:27:25.021: INFO: stderr: ""
May 16 14:27:25.021: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
May 16 14:27:25.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6936'
May 16 14:27:25.128: INFO: stderr: ""
May 16 14:27:25.128: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:27:25.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6936" for this suite.
May 16 14:27:47.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:27:47.229: INFO: namespace kubectl-6936 deletion completed in 22.098358353s
• [SLOW TEST:29.720 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:27:47.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 16 14:27:47.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4863'
May 16 14:27:47.649: INFO: stderr: ""
May 16 14:27:47.649: INFO: stdout: "replicationcontroller/redis-master created\n"
May 16 14:27:47.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4863'
May 16 14:27:47.983: INFO: stderr: ""
May 16 14:27:47.983: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 16 14:27:48.995: INFO: Selector matched 1 pods for map[app:redis]
May 16 14:27:48.995: INFO: Found 0 / 1
May 16 14:27:49.988: INFO: Selector matched 1 pods for map[app:redis]
May 16 14:27:49.988: INFO: Found 0 / 1
May 16 14:27:50.988: INFO: Selector matched 1 pods for map[app:redis]
May 16 14:27:50.988: INFO: Found 0 / 1
May 16 14:27:51.987: INFO: Selector matched 1 pods for map[app:redis]
May 16 14:27:51.987: INFO: Found 1 / 1
May 16 14:27:51.987: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 16 14:27:51.990: INFO: Selector matched 1 pods for map[app:redis]
May 16 14:27:51.990: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 16 14:27:51.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cnz8g --namespace=kubectl-4863'
May 16 14:27:52.110: INFO: stderr: ""
May 16 14:27:52.110: INFO: stdout: "Name: redis-master-cnz8g\nNamespace: kubectl-4863\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Sat, 16 May 2020 14:27:47 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.6\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://6e55de9b23b5f81cf9ec3507c192e3efaf85f5a9dbb9653623fb8a520f81ec20\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 16 May 2020 14:27:50 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4n2h9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4n2h9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4n2h9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-4863/redis-master-cnz8g to iruya-worker2\n Normal Pulled 4s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 2s kubelet, iruya-worker2 Started container redis-master\n"
May 16 14:27:52.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4863'
May 16 14:27:52.238: INFO: stderr: ""
May 16 14:27:52.238: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4863\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-cnz8g\n"
May 16 14:27:52.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4863'
May 16 14:27:52.345: INFO: stderr: ""
May 16 14:27:52.345: INFO: stdout: "Name: redis-master\nNamespace: kubectl-4863\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.69.37\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.6:6379\nSession Affinity: None\nEvents: \n"
May 16 14:27:52.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
May 16 14:27:52.467: INFO: stderr: ""
May 16 14:27:52.467: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 16 May 2020 14:26:54 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 16 May 2020 14:26:54 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 16 May 2020 14:26:54 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 16 May 2020 14:26:54 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 61d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 16 14:27:52.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4863'
May 16 14:27:52.574: INFO: stderr: ""
May 16 14:27:52.574: INFO: stdout: "Name: kubectl-4863\nLabels: e2e-framework=kubectl\n e2e-run=0f830618-8eb8-4d0c-9f82-8cd8bc6973fb\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:27:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4863" for this suite.
May 16 14:28:14.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:28:14.684: INFO: namespace kubectl-4863 deletion completed in 22.10699909s
• [SLOW TEST:27.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:28:14.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0516 14:28:26.100745 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 16 14:28:26.100: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:28:26.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4266" for this suite.
May 16 14:28:32.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:28:32.623: INFO: namespace gc-4266 deletion completed in 6.519618616s
• [SLOW TEST:17.938 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:28:32.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 16 14:28:33.052: INFO: Waiting up to 5m0s for pod "pod-7d369d37-a68b-48f1-b9fa-063be67b96bd" in namespace "emptydir-6266" to be "success or failure"
May 16 14:28:33.060: INFO: Pod "pod-7d369d37-a68b-48f1-b9fa-063be67b96bd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.962092ms
May 16 14:28:35.064: INFO: Pod "pod-7d369d37-a68b-48f1-b9fa-063be67b96bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012659167s
May 16 14:28:37.069: INFO: Pod "pod-7d369d37-a68b-48f1-b9fa-063be67b96bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017266529s
STEP: Saw pod success
May 16 14:28:37.069: INFO: Pod "pod-7d369d37-a68b-48f1-b9fa-063be67b96bd" satisfied condition "success or failure"
May 16 14:28:37.072: INFO: Trying to get logs from node iruya-worker2 pod pod-7d369d37-a68b-48f1-b9fa-063be67b96bd container test-container:
STEP: delete the pod
May 16 14:28:37.122: INFO: Waiting for pod pod-7d369d37-a68b-48f1-b9fa-063be67b96bd to disappear
May 16 14:28:37.151: INFO: Pod pod-7d369d37-a68b-48f1-b9fa-063be67b96bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:28:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6266" for this suite.
May 16 14:28:43.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:28:43.231: INFO: namespace emptydir-6266 deletion completed in 6.074481064s
• [SLOW TEST:10.607 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:28:43.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:28:48.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-709" for this suite.
May 16 14:29:10.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:29:10.433: INFO: namespace replication-controller-709 deletion completed in 22.094771095s
• [SLOW TEST:27.203 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:29:10.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a1463559-d95b-4b9a-a37c-04d98b2c2c75
STEP: Creating a pod to test consume secrets
May 16 14:29:10.524: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6" in namespace "projected-5359" to be "success or failure"
May 16 14:29:10.540: INFO: Pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.925257ms
May 16 14:29:12.546: INFO: Pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021127283s
May 16 14:29:14.550: INFO: Pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.025872437s
May 16 14:29:16.555: INFO: Pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030274282s
STEP: Saw pod success
May 16 14:29:16.555: INFO: Pod "pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6" satisfied condition "success or failure"
May 16 14:29:16.558: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6 container projected-secret-volume-test:
STEP: delete the pod
May 16 14:29:16.590: INFO: Waiting for pod pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6 to disappear
May 16 14:29:16.656: INFO: Pod pod-projected-secrets-2ffa8484-cfbe-43ee-8a3f-98aa675a4bf6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:29:16.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5359" for this suite.
May 16 14:29:22.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:29:22.777: INFO: namespace projected-5359 deletion completed in 6.111172107s
• [SLOW TEST:12.343 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:29:22.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:29:29.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9605" for this suite.
May 16 14:29:35.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:29:35.309: INFO: namespace namespaces-9605 deletion completed in 6.093671886s
STEP: Destroying namespace "nsdeletetest-2938" for this suite.
May 16 14:29:35.311: INFO: Namespace nsdeletetest-2938 was already deleted
STEP: Destroying namespace "nsdeletetest-2219" for this suite.
May 16 14:29:41.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:29:41.418: INFO: namespace nsdeletetest-2219 deletion completed in 6.107092172s
• [SLOW TEST:18.641 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:29:41.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7fb7d202-b92e-4a68-8670-2b8637a8b0ab
STEP: Creating a pod to test consume configMaps
May 16 14:29:41.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2" in namespace "configmap-7398" to be "success or failure"
May 16 14:29:41.499: INFO: Pod "pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.403632ms
May 16 14:29:43.585: INFO: Pod "pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088988873s
May 16 14:29:45.590: INFO: Pod "pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093899885s
STEP: Saw pod success
May 16 14:29:45.590: INFO: Pod "pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2" satisfied condition "success or failure"
May 16 14:29:45.593: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2 container configmap-volume-test:
STEP: delete the pod
May 16 14:29:45.628: INFO: Waiting for pod pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2 to disappear
May 16 14:29:45.643: INFO: Pod pod-configmaps-9a504e2d-2823-4a8e-ad32-a510171143d2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:29:45.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7398" for this suite.
May 16 14:29:51.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:29:51.740: INFO: namespace configmap-7398 deletion completed in 6.094684007s
• [SLOW TEST:10.321 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:29:51.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-feb1eeb3-dc60-48f6-bc6c-7f0ac9d23d8e
STEP: Creating a pod to test consume configMaps
May 16 14:29:51.892: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23" in namespace "projected-9320" to be "success or failure"
May 16 14:29:51.900: INFO: Pod "pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446282ms
May 16 14:29:53.904: INFO: Pod "pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011964934s
May 16 14:29:55.908: INFO: Pod "pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016493055s
STEP: Saw pod success
May 16 14:29:55.908: INFO: Pod "pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23" satisfied condition "success or failure"
May 16 14:29:55.911: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23 container projected-configmap-volume-test:
STEP: delete the pod
May 16 14:29:55.963: INFO: Waiting for pod pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23 to disappear
May 16 14:29:55.978: INFO: Pod pod-projected-configmaps-924fe0e5-2fd8-4054-adf0-e2d449d35d23 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:29:55.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9320" for this suite.
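The "mappings and Item mode set" case above exercises a projected configMap volume where individual keys are remapped to new paths with an explicit per-file mode. A hedged sketch of such a pod spec (names, image, key, and mode value are illustrative assumptions, not the suite's generated ones):

```yaml
# Hypothetical pod using a projected configMap source with a per-item
# path mapping and an explicit file mode ("Item mode").
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # the suite uses its own test image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative
          items:
          - key: data-2
            path: path/to/data-2
            mode: 0400          # the per-item mode the test asserts on
  restartPolicy: Never
```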
May 16 14:30:02.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:30:02.213: INFO: namespace projected-9320 deletion completed in 6.231932389s
• [SLOW TEST:10.472 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:30:02.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 16 14:30:02.287: INFO: Waiting up to 5m0s for pod "pod-c1359303-0fa3-4ce3-a099-5526afd113ef" in namespace "emptydir-3967" to be "success or failure"
May 16 14:30:02.310: INFO: Pod "pod-c1359303-0fa3-4ce3-a099-5526afd113ef": Phase="Pending", Reason="", readiness=false. Elapsed: 23.39218ms
May 16 14:30:04.315: INFO: Pod "pod-c1359303-0fa3-4ce3-a099-5526afd113ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027957194s
May 16 14:30:06.318: INFO: Pod "pod-c1359303-0fa3-4ce3-a099-5526afd113ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031497296s
STEP: Saw pod success
May 16 14:30:06.318: INFO: Pod "pod-c1359303-0fa3-4ce3-a099-5526afd113ef" satisfied condition "success or failure"
May 16 14:30:06.320: INFO: Trying to get logs from node iruya-worker pod pod-c1359303-0fa3-4ce3-a099-5526afd113ef container test-container:
STEP: delete the pod
May 16 14:30:06.339: INFO: Waiting for pod pod-c1359303-0fa3-4ce3-a099-5526afd113ef to disappear
May 16 14:30:06.343: INFO: Pod pod-c1359303-0fa3-4ce3-a099-5526afd113ef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:30:06.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3967" for this suite.
May 16 14:30:12.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:30:12.443: INFO: namespace emptydir-3967 deletion completed in 6.0971589s
• [SLOW TEST:10.229 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:30:12.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lc6j
STEP: Creating a pod to test atomic-volume-subpath
May 16 14:30:12.537: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lc6j" in namespace "subpath-943" to be "success or failure"
May 16 14:30:12.541: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420239ms
May 16 14:30:14.545: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007724736s
May 16 14:30:16.548: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 4.011326419s
May 16 14:30:18.552: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 6.015184506s
May 16 14:30:20.557: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.019518041s
May 16 14:30:22.560: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.023376386s
May 16 14:30:24.564: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.027379585s
May 16 14:30:26.569: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.03180654s
May 16 14:30:28.574: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.036709592s
May 16 14:30:30.578: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.041212547s
May 16 14:30:32.583: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.046128018s
May 16 14:30:34.588: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 22.050678373s
May 16 14:30:36.592: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Running", Reason="", readiness=true. Elapsed: 24.055140704s
May 16 14:30:38.597: INFO: Pod "pod-subpath-test-configmap-lc6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.059995652s
STEP: Saw pod success
May 16 14:30:38.597: INFO: Pod "pod-subpath-test-configmap-lc6j" satisfied condition "success or failure"
May 16 14:30:38.601: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-lc6j container test-container-subpath-configmap-lc6j:
STEP: delete the pod
May 16 14:30:38.638: INFO: Waiting for pod pod-subpath-test-configmap-lc6j to disappear
May 16 14:30:38.662: INFO: Pod pod-subpath-test-configmap-lc6j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lc6j
May 16 14:30:38.662: INFO: Deleting pod "pod-subpath-test-configmap-lc6j" in namespace "subpath-943"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:30:38.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-943" for this suite.
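The atomic-writer subpath test above keeps a pod `Running` for ~26 s while it verifies that a single key of a configMap volume, mounted via `subPath`, stays consistent across atomic updates. A minimal illustrative manifest for the `subPath` mechanism (names, image, key, and command are assumptions, not the suite's):

```yaml
# Hypothetical pod mounting one file out of a configMap volume via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap    # illustrative name
spec:
  containers:
  - name: test-container-subpath-configmap
    image: busybox                    # the suite uses its own test image
    command: ["sh", "-c", "cat /test-volume/configmap-file"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /test-volume/configmap-file
      subPath: configmap-key          # mounts only this key of the volume as a file
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap              # illustrative
  restartPolicy: Never
```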
May 16 14:30:44.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:30:44.772: INFO: namespace subpath-943 deletion completed in 6.105224908s
• [SLOW TEST:32.329 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:30:44.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 16 14:30:44.813: INFO: Waiting up to 5m0s for pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce" in namespace "emptydir-8757" to be "success or failure"
May 16 14:30:44.903: INFO: Pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 89.916988ms
May 16 14:30:46.907: INFO: Pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094519799s
May 16 14:30:48.912: INFO: Pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.099404913s
May 16 14:30:50.917: INFO: Pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104370602s
STEP: Saw pod success
May 16 14:30:50.917: INFO: Pod "pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce" satisfied condition "success or failure"
May 16 14:30:50.920: INFO: Trying to get logs from node iruya-worker2 pod pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce container test-container:
STEP: delete the pod
May 16 14:30:50.938: INFO: Waiting for pod pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce to disappear
May 16 14:30:50.943: INFO: Pod pod-80e370e1-b23d-4b7d-afc1-1630453fe1ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:30:50.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8757" for this suite.
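The emptyDir cases in this log ("emptydir 0666 on node default medium" above, and the 0777/tmpfs variants elsewhere) each create a single-shot pod that writes into an `emptyDir` volume and reports the resulting file mode. A rough standalone equivalent, with all names, the image, and the check command as illustrative assumptions:

```yaml
# Hypothetical stand-in for the e2e emptyDir permission pods:
# create a file in the volume, set 0666, and print the listing.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-test        # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox                    # the suite uses its own mount-test image
    command: ["sh", "-c", "touch /test-empty-dir/file && chmod 0666 /test-empty-dir/file && ls -l /test-empty-dir"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-empty-dir
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium: backed by node storage
  restartPolicy: Never
```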
May 16 14:30:56.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:30:57.046: INFO: namespace emptydir-8757 deletion completed in 6.09965707s
• [SLOW TEST:12.273 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:30:57.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 16 14:30:57.168: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231974,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 16 14:30:57.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231975,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 16 14:30:57.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231976,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 16 14:31:07.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231997,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 16 14:31:07.215: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231998,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 16 14:31:07.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9452,SelfLink:/api/v1/namespaces/watch-9452/configmaps/e2e-watch-test-label-changed,UID:f7e059b5-a9d0-4723-a434-abb48d1b1e46,ResourceVersion:11231999,Generation:0,CreationTimestamp:2020-05-16 14:30:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:31:07.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9452" for this suite.
May 16 14:31:13.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:31:13.341: INFO: namespace watch-9452 deletion completed in 6.1153519s
• [SLOW TEST:16.295 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:31:13.342:
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-44e781b8-3f18-41d3-9151-138a939f5bdc
STEP: Creating a pod to test consume configMaps
May 16 14:31:13.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d" in namespace "projected-3348" to be "success or failure"
May 16 14:31:13.472: INFO: Pod "pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.26533ms
May 16 14:31:15.475: INFO: Pod "pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044113067s
May 16 14:31:17.479: INFO: Pod "pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04755201s
STEP: Saw pod success
May 16 14:31:17.479: INFO: Pod "pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d" satisfied condition "success or failure"
May 16 14:31:17.481: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d container projected-configmap-volume-test:
STEP: delete the pod
May 16 14:31:17.526: INFO: Waiting for pod pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d to disappear
May 16 14:31:17.564: INFO: Pod pod-projected-configmaps-589e3bb4-b4bf-4f70-b760-fb659964c18d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:31:17.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3348" for this suite.
May 16 14:31:23.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:31:23.698: INFO: namespace projected-3348 deletion completed in 6.13084616s
• [SLOW TEST:10.356 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:31:23.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 16 14:31:23.760: INFO: Waiting up to 5m0s for pod "pod-7a49c651-e342-48a5-b666-3c9ff21df8e8" in namespace "emptydir-8328" to be "success or failure"
May 16 14:31:23.774: INFO: Pod "pod-7a49c651-e342-48a5-b666-3c9ff21df8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.775804ms
May 16 14:31:25.814: INFO: Pod "pod-7a49c651-e342-48a5-b666-3c9ff21df8e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053925843s
May 16 14:31:27.817: INFO: Pod "pod-7a49c651-e342-48a5-b666-3c9ff21df8e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057627007s
STEP: Saw pod success
May 16 14:31:27.817: INFO: Pod "pod-7a49c651-e342-48a5-b666-3c9ff21df8e8" satisfied condition "success or failure"
May 16 14:31:27.822: INFO: Trying to get logs from node iruya-worker2 pod pod-7a49c651-e342-48a5-b666-3c9ff21df8e8 container test-container:
STEP: delete the pod
May 16 14:31:27.844: INFO: Waiting for pod pod-7a49c651-e342-48a5-b666-3c9ff21df8e8 to disappear
May 16 14:31:27.848: INFO: Pod pod-7a49c651-e342-48a5-b666-3c9ff21df8e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:31:27.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8328" for this suite.
May 16 14:31:33.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:31:33.958: INFO: namespace emptydir-8328 deletion completed in 6.100381391s
• [SLOW TEST:10.260 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:31:33.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8c217160-aa8f-4b36-8d77-d43799a6aadd
STEP: Creating a pod to test consume configMaps
May 16 14:31:34.048: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d" in namespace "projected-457" to be "success or failure"
May 16 14:31:34.052: INFO: Pod "pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47476ms
May 16 14:31:36.120: INFO: Pod "pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071318091s
May 16 14:31:38.123: INFO: Pod "pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074711204s
STEP: Saw pod success
May 16 14:31:38.123: INFO: Pod "pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d" satisfied condition "success or failure"
May 16 14:31:38.125: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d container projected-configmap-volume-test:
STEP: delete the pod
May 16 14:31:38.144: INFO: Waiting for pod pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d to disappear
May 16 14:31:38.304: INFO: Pod pod-projected-configmaps-bf2af5c2-3acc-4e81-a84d-e9178f0c258d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:31:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-457" for this suite.
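Unlike the per-item mode case earlier, the "defaultMode set" test above applies one mode to every file projected into the volume. An illustrative sketch (names, image, and the mode value are assumptions):

```yaml
# Hypothetical pod using projected.defaultMode instead of per-item modes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-defaultmode    # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                   # the suite uses its own test image
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0440              # applied to every projected file unless an item overrides it
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative
  restartPolicy: Never
```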
May 16 14:31:44.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:31:44.541: INFO: namespace projected-457 deletion completed in 6.232425749s
• [SLOW TEST:10.582 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 16 14:31:44.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
May 16 14:31:44.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7170'
May 16 14:31:44.910: INFO: stderr: ""
May 16 14:31:44.910: INFO: stdout: "pod/pause created\n"
May 16 14:31:44.910: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 16 14:31:44.910: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7170" to be "running and ready"
May 16 14:31:44.957: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 46.506042ms
May 16 14:31:46.961: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051104754s
May 16 14:31:48.966: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.055707023s
May 16 14:31:48.966: INFO: Pod "pause" satisfied condition "running and ready"
May 16 14:31:48.966: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
May 16 14:31:48.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7170'
May 16 14:31:49.067: INFO: stderr: ""
May 16 14:31:49.067: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 16 14:31:49.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7170'
May 16 14:31:49.155: INFO: stderr: ""
May 16 14:31:49.155: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 16 14:31:49.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7170'
May 16 14:31:49.248: INFO: stderr: ""
May 16 14:31:49.248: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 16 14:31:49.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7170'
May 16 14:31:49.357: INFO: stderr: ""
May 16 14:31:49.357: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
May 16 14:31:49.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7170'
May 16 14:31:49.489: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 16 14:31:49.489: INFO: stdout: "pod \"pause\" force deleted\n"
May 16 14:31:49.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7170'
May 16 14:31:49.599: INFO: stderr: "No resources found.\n"
May 16 14:31:49.599: INFO: stdout: ""
May 16 14:31:49.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7170 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 16 14:31:49.862: INFO: stderr: ""
May 16 14:31:49.862: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:31:49.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7170" for this suite.
May 16 14:31:55.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:31:55.988: INFO: namespace kubectl-7170 deletion completed in 6.122029653s • [SLOW TEST:11.446 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:31:55.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 16 14:31:56.024: INFO: Waiting up to 5m0s for pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c" in namespace "emptydir-596" to be "success or failure" May 16 14:31:56.078: INFO: Pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.552765ms May 16 14:31:58.082: INFO: Pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.057228116s May 16 14:32:00.086: INFO: Pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c": Phase="Running", Reason="", readiness=true. Elapsed: 4.061633109s May 16 14:32:02.090: INFO: Pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065772251s STEP: Saw pod success May 16 14:32:02.090: INFO: Pod "pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c" satisfied condition "success or failure" May 16 14:32:02.093: INFO: Trying to get logs from node iruya-worker2 pod pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c container test-container: STEP: delete the pod May 16 14:32:02.133: INFO: Waiting for pod pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c to disappear May 16 14:32:02.160: INFO: Pod pod-5d2c7dd2-be15-441a-9d78-635b1c324f1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:32:02.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-596" for this suite. 
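The EmptyDir steps above create a pod whose volume is backed by tmpfs (`medium: Memory`) and verify the mount's mode. A minimal sketch of an equivalent pod, assuming a live cluster; the pod name and `busybox` image are illustrative stand-ins for the test's generated mounttest pod.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check   # hypothetical name; the test generates a UID-based one
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # stand-in; the e2e test uses its own mounttest image
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # back the emptyDir with tmpfs
EOF

# Once the pod reaches Succeeded, its logs show the mount mode and fs type
kubectl logs emptydir-mode-check
```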
May 16 14:32:08.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:32:08.260: INFO: namespace emptydir-596 deletion completed in 6.095772475s • [SLOW TEST:12.272 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:32:08.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 16 14:32:12.882: INFO: Successfully updated pod "annotationupdate88dcc758-3a98-4b99-9ebf-dc0e49fe3c26" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:32:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9177" for this suite. 
May 16 14:32:38.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:32:39.041: INFO: namespace projected-9177 deletion completed in 22.099117241s • [SLOW TEST:30.781 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:32:39.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f2b3f59e-b020-457a-a0a9-c2f0b9902abd STEP: Creating a pod to test consume configMaps May 16 14:32:39.122: INFO: Waiting up to 5m0s for pod "pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4" in namespace "configmap-8280" to be "success or failure" May 16 14:32:39.132: INFO: Pod "pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231776ms May 16 14:32:41.135: INFO: Pod "pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013132309s May 16 14:32:43.140: INFO: Pod "pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018066645s STEP: Saw pod success May 16 14:32:43.140: INFO: Pod "pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4" satisfied condition "success or failure" May 16 14:32:43.144: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4 container configmap-volume-test: STEP: delete the pod May 16 14:32:43.163: INFO: Waiting for pod pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4 to disappear May 16 14:32:43.167: INFO: Pod pod-configmaps-67d9e9b9-84ee-48c7-a947-b3c795e32dd4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:32:43.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8280" for this suite. May 16 14:32:49.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:32:49.251: INFO: namespace configmap-8280 deletion completed in 6.08177726s • [SLOW TEST:10.210 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:32:49.252: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 16 14:32:49.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c" in namespace "projected-5254" to be "success or failure" May 16 14:32:49.365: INFO: Pod "downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.672403ms May 16 14:32:51.370: INFO: Pod "downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008564688s May 16 14:32:53.374: INFO: Pod "downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012682917s STEP: Saw pod success May 16 14:32:53.374: INFO: Pod "downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c" satisfied condition "success or failure" May 16 14:32:53.376: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c container client-container: STEP: delete the pod May 16 14:32:53.425: INFO: Waiting for pod downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c to disappear May 16 14:32:53.509: INFO: Pod downwardapi-volume-6d7f081d-a804-46a7-8b7f-223a0c43199c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:32:53.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5254" for this suite. 
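The projected downward API test above exposes the container's own memory request as a file in a projected volume. A sketch of an equivalent pod, assuming a live cluster; the pod name, image, and `32Mi` request are illustrative stand-ins.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-request   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in; the e2e test uses its own mounttest image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
```

With the default divisor, the file holds the request in bytes.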
May 16 14:32:59.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:32:59.605: INFO: namespace projected-5254 deletion completed in 6.092009378s • [SLOW TEST:10.353 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:32:59.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:33:03.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9161" for this suite. 
May 16 14:33:09.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:33:09.972: INFO: namespace emptydir-wrapper-9161 deletion completed in 6.085414671s • [SLOW TEST:10.367 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:33:09.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:33:10.034: INFO: Creating ReplicaSet my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f May 16 14:33:10.064: INFO: Pod name my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f: Found 0 pods out of 1 May 16 14:33:15.068: INFO: Pod name my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f: Found 1 pods out of 1 May 16 14:33:15.068: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f" is running May 16 14:33:15.071: INFO: Pod "my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f-qzxz4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-05-16 14:33:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 14:33:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 14:33:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-16 14:33:10 +0000 UTC Reason: Message:}]) May 16 14:33:15.072: INFO: Trying to dial the pod May 16 14:33:20.081: INFO: Controller my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f: Got expected result from replica 1 [my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f-qzxz4]: "my-hostname-basic-284b0292-a059-4342-8099-350f69979d7f-qzxz4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:33:20.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-590" for this suite. 
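The ReplicaSet test above creates one replica of a server that answers with its own hostname, waits for the pod to run, then dials each replica. A hypothetical equivalent manifest, assuming a live cluster; the name is illustrative (the test appends a generated UID) and the image is an assumption — any public image that serves its hostname on port 9376 fits.

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname:v1.4   # assumption: a serve-hostname image
        ports:
        - containerPort: 9376
EOF

# Verify the single replica is up, as the test does before dialing it
kubectl get pods -l name=my-hostname-basic
```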
May 16 14:33:26.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:33:26.177: INFO: namespace replicaset-590 deletion completed in 6.09295024s • [SLOW TEST:16.205 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:33:26.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 16 14:33:26.301: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5491,SelfLink:/api/v1/namespaces/watch-5491/configmaps/e2e-watch-test-resource-version,UID:35dc1ac8-7107-44c2-8b6c-c1446dd9ed6b,ResourceVersion:11232530,Generation:0,CreationTimestamp:2020-05-16 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 16 14:33:26.301: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5491,SelfLink:/api/v1/namespaces/watch-5491/configmaps/e2e-watch-test-resource-version,UID:35dc1ac8-7107-44c2-8b6c-c1446dd9ed6b,ResourceVersion:11232531,Generation:0,CreationTimestamp:2020-05-16 14:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:33:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5491" for this suite. 
May 16 14:33:32.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:33:32.411: INFO: namespace watch-5491 deletion completed in 6.106533657s • [SLOW TEST:6.233 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:33:32.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-304188d7-f724-4838-8b1f-8e5fc89bfcc3 STEP: Creating a pod to test consume secrets May 16 14:33:32.499: INFO: Waiting up to 5m0s for pod "pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465" in namespace "secrets-2060" to be "success or failure" May 16 14:33:32.503: INFO: Pod "pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.977243ms May 16 14:33:34.507: INFO: Pod "pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008299732s May 16 14:33:36.511: INFO: Pod "pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012295277s STEP: Saw pod success May 16 14:33:36.511: INFO: Pod "pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465" satisfied condition "success or failure" May 16 14:33:36.514: INFO: Trying to get logs from node iruya-worker pod pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465 container secret-volume-test: STEP: delete the pod May 16 14:33:36.593: INFO: Waiting for pod pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465 to disappear May 16 14:33:36.604: INFO: Pod pod-secrets-2b405165-5110-4f83-ad93-a2f5fe529465 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:33:36.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2060" for this suite. 
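The Secrets test above mounts a secret volume with an explicit `defaultMode` into a pod running as a non-root user with `fsGroup` set. A sketch of an equivalent setup, assuming a live cluster; the names, image, UIDs, and key/value are illustrative stand-ins.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mode       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, per the [LinuxOnly] setup
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox             # stand-in; the e2e test uses its own mounttest image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400        # file mode applied to the projected keys
EOF
```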
May 16 14:33:42.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:33:42.816: INFO: namespace secrets-2060 deletion completed in 6.208730582s • [SLOW TEST:10.403 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:33:42.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 16 14:33:42.876: INFO: Waiting up to 5m0s for pod "pod-6a3b1362-b802-4095-9478-535eff95a3a6" in namespace "emptydir-9274" to be "success or failure" May 16 14:33:42.881: INFO: Pod "pod-6a3b1362-b802-4095-9478-535eff95a3a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723463ms May 16 14:33:44.884: INFO: Pod "pod-6a3b1362-b802-4095-9478-535eff95a3a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008580428s May 16 14:33:46.889: INFO: Pod "pod-6a3b1362-b802-4095-9478-535eff95a3a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013573276s STEP: Saw pod success May 16 14:33:46.889: INFO: Pod "pod-6a3b1362-b802-4095-9478-535eff95a3a6" satisfied condition "success or failure" May 16 14:33:46.892: INFO: Trying to get logs from node iruya-worker2 pod pod-6a3b1362-b802-4095-9478-535eff95a3a6 container test-container: STEP: delete the pod May 16 14:33:46.956: INFO: Waiting for pod pod-6a3b1362-b802-4095-9478-535eff95a3a6 to disappear May 16 14:33:46.964: INFO: Pod pod-6a3b1362-b802-4095-9478-535eff95a3a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:33:46.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9274" for this suite. May 16 14:33:53.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:33:53.101: INFO: namespace emptydir-9274 deletion completed in 6.133888095s • [SLOW TEST:10.285 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:33:53.102: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:34:24.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7610" for this suite. 
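The three containers exercised above differ in restart policy — the `-rpa`, `-rpof`, and `-rpn` suffixes correspond to `Always`, `OnFailure`, and `Never` — and the test checks the resulting phase, readiness, state, and restart count for each. A hypothetical minimal reproduction of the `Never` case, assuming a live cluster:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-rpn
    image: busybox               # stand-in image
    command: ["sh", "-c", "exit 0"]
EOF

# Inspect the fields the test asserts on: phase and restart count
kubectl get pod terminate-cmd-rpn-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
```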
May 16 14:34:30.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:34:30.164: INFO: namespace container-runtime-7610 deletion completed in 6.128132194s • [SLOW TEST:37.062 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:34:30.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 16 14:34:30.209: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:34:38.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8290" for this suite. May 16 14:34:44.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:34:44.588: INFO: namespace init-container-8290 deletion completed in 6.093104846s • [SLOW TEST:14.424 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:34:44.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 16 14:34:44.703: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"16be275b-64e7-41af-8c35-353f3ec5ba7d", Controller:(*bool)(0xc002cd16da), BlockOwnerDeletion:(*bool)(0xc002cd16db)}} May 16 14:34:44.728: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", 
UID:"7b46ca37-6912-4635-b98e-ca946358a074", Controller:(*bool)(0xc0027b7fb2), BlockOwnerDeletion:(*bool)(0xc0027b7fb3)}} May 16 14:34:44.744: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f4088a78-d98e-4093-97b0-19e6fe8b2040", Controller:(*bool)(0xc002cd188a), BlockOwnerDeletion:(*bool)(0xc002cd188b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:34:49.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8312" for this suite. May 16 14:34:55.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:34:55.928: INFO: namespace gc-8312 deletion completed in 6.089794666s • [SLOW TEST:11.338 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:34:55.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-01f850c2-53c1-4b2f-9625-566b46b1f0b1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-01f850c2-53c1-4b2f-9625-566b46b1f0b1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:35:02.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-884" for this suite. May 16 14:35:24.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:35:24.174: INFO: namespace configmap-884 deletion completed in 22.09736224s • [SLOW TEST:28.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:35:24.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 16 
14:35:28.797: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2027 pod-service-account-da590624-975b-4059-8328-993a5a982254 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 16 14:35:29.044: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2027 pod-service-account-da590624-975b-4059-8328-993a5a982254 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 16 14:35:29.235: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2027 pod-service-account-da590624-975b-4059-8328-993a5a982254 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:35:29.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2027" for this suite. 
May 16 14:35:35.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:35:35.560: INFO: namespace svcaccounts-2027 deletion completed in 6.113439717s • [SLOW TEST:11.386 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:35:35.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 16 14:35:35.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3773' May 16 14:35:35.747: INFO: stderr: "" May 16 14:35:35.747: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: 
verifying the pod e2e-test-nginx-pod was created May 16 14:35:40.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3773 -o json' May 16 14:35:40.900: INFO: stderr: "" May 16 14:35:40.900: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-16T14:35:35Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3773\",\n \"resourceVersion\": \"11233076\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3773/pods/e2e-test-nginx-pod\",\n \"uid\": \"5f0b0c60-7870-4ff9-b48a-a57391f1dcca\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-82kcg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-82kcg\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-82kcg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T14:35:35Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T14:35:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T14:35:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-16T14:35:35Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://43a50debc1fc9e28bbee84fe283bb5f972552687e66d52fe7854b66894c103bb\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-16T14:35:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.27\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-16T14:35:35Z\"\n }\n}\n" STEP: replace the image in the pod May 16 14:35:40.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3773' May 16 14:35:41.197: INFO: stderr: "" May 16 14:35:41.197: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 16 14:35:41.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3773' May 16 14:35:51.895: INFO: stderr: "" May 16 14:35:51.895: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:35:51.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3773" for this suite. May 16 14:35:57.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:35:57.987: INFO: namespace kubectl-3773 deletion completed in 6.086869508s • [SLOW TEST:22.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:35:57.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1a1b9e8f-2e46-42e4-9559-a9eed218629c STEP: Creating a pod to test consume configMaps May 16 14:35:58.055: INFO: Waiting up to 5m0s 
for pod "pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269" in namespace "projected-544" to be "success or failure" May 16 14:35:58.065: INFO: Pod "pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119209ms May 16 14:36:00.153: INFO: Pod "pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098058396s May 16 14:36:02.157: INFO: Pod "pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102186712s STEP: Saw pod success May 16 14:36:02.157: INFO: Pod "pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269" satisfied condition "success or failure" May 16 14:36:02.159: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269 container projected-configmap-volume-test: STEP: delete the pod May 16 14:36:02.208: INFO: Waiting for pod pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269 to disappear May 16 14:36:02.227: INFO: Pod pod-projected-configmaps-e788ef02-0c74-4bc6-85d3-bb66426e8269 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:36:02.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-544" for this suite. 
May 16 14:36:08.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:36:08.338: INFO: namespace projected-544 deletion completed in 6.107249404s • [SLOW TEST:10.350 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:36:08.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ff8890bf-b3d9-4cb9-b49e-3f1a4ec01ccc STEP: Creating a pod to test consume secrets May 16 14:36:08.449: INFO: Waiting up to 5m0s for pod "pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3" in namespace "secrets-7846" to be "success or failure" May 16 14:36:08.467: INFO: Pod "pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.889617ms May 16 14:36:10.471: INFO: Pod "pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021574682s May 16 14:36:12.475: INFO: Pod "pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025616224s STEP: Saw pod success May 16 14:36:12.475: INFO: Pod "pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3" satisfied condition "success or failure" May 16 14:36:12.478: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3 container secret-env-test: STEP: delete the pod May 16 14:36:12.495: INFO: Waiting for pod pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3 to disappear May 16 14:36:12.499: INFO: Pod pod-secrets-50cc97df-aebd-4e20-b300-ef5d1528f9a3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:36:12.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7846" for this suite. May 16 14:36:18.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:36:18.617: INFO: namespace secrets-7846 deletion completed in 6.09266969s • [SLOW TEST:10.279 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:36:18.618: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 16 14:36:18.755: INFO: Waiting up to 5m0s for pod "var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189" in namespace "var-expansion-258" to be "success or failure" May 16 14:36:18.770: INFO: Pod "var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189": Phase="Pending", Reason="", readiness=false. Elapsed: 15.259812ms May 16 14:36:20.788: INFO: Pod "var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03299947s May 16 14:36:22.792: INFO: Pod "var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037820905s STEP: Saw pod success May 16 14:36:22.792: INFO: Pod "var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189" satisfied condition "success or failure" May 16 14:36:22.796: INFO: Trying to get logs from node iruya-worker pod var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189 container dapi-container: STEP: delete the pod May 16 14:36:22.890: INFO: Waiting for pod var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189 to disappear May 16 14:36:22.925: INFO: Pod var-expansion-99c05671-ecd4-4b00-90ae-ecb43f9a8189 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:36:22.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-258" for this suite. 
May 16 14:36:28.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:36:29.028: INFO: namespace var-expansion-258 deletion completed in 6.099287837s • [SLOW TEST:10.411 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:36:29.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 16 14:36:29.153: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4330,SelfLink:/api/v1/namespaces/watch-4330/configmaps/e2e-watch-test-watch-closed,UID:cafe6dac-a37d-4d5c-86a4-80cee276fe45,ResourceVersion:11233270,Generation:0,CreationTimestamp:2020-05-16 14:36:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 16 14:36:29.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4330,SelfLink:/api/v1/namespaces/watch-4330/configmaps/e2e-watch-test-watch-closed,UID:cafe6dac-a37d-4d5c-86a4-80cee276fe45,ResourceVersion:11233271,Generation:0,CreationTimestamp:2020-05-16 14:36:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 16 14:36:29.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4330,SelfLink:/api/v1/namespaces/watch-4330/configmaps/e2e-watch-test-watch-closed,UID:cafe6dac-a37d-4d5c-86a4-80cee276fe45,ResourceVersion:11233272,Generation:0,CreationTimestamp:2020-05-16 14:36:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 
16 14:36:29.166: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4330,SelfLink:/api/v1/namespaces/watch-4330/configmaps/e2e-watch-test-watch-closed,UID:cafe6dac-a37d-4d5c-86a4-80cee276fe45,ResourceVersion:11233273,Generation:0,CreationTimestamp:2020-05-16 14:36:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:36:29.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4330" for this suite. May 16 14:36:35.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:36:35.414: INFO: namespace watch-4330 deletion completed in 6.191373708s • [SLOW TEST:6.386 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client May 16 14:36:35.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:36:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7382" for this suite. May 16 14:37:25.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:37:25.611: INFO: namespace kubelet-test-7382 deletion completed in 46.100795094s • [SLOW TEST:50.197 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:37:25.612: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-97d6 STEP: Creating a pod to test atomic-volume-subpath May 16 14:37:25.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-97d6" in namespace "subpath-5523" to be "success or failure" May 16 14:37:25.686: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040558ms May 16 14:37:27.807: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124508578s May 16 14:37:29.811: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 4.128950313s May 16 14:37:31.816: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 6.13360398s May 16 14:37:33.821: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 8.138135179s May 16 14:37:35.825: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 10.143039544s May 16 14:37:37.843: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 12.160116205s May 16 14:37:39.847: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 14.164374689s May 16 14:37:41.859: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.176642861s May 16 14:37:43.864: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 18.181179083s May 16 14:37:45.868: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 20.185366668s May 16 14:37:47.872: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Running", Reason="", readiness=true. Elapsed: 22.189666098s May 16 14:37:49.876: INFO: Pod "pod-subpath-test-configmap-97d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.193127844s STEP: Saw pod success May 16 14:37:49.876: INFO: Pod "pod-subpath-test-configmap-97d6" satisfied condition "success or failure" May 16 14:37:49.878: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-97d6 container test-container-subpath-configmap-97d6: STEP: delete the pod May 16 14:37:49.895: INFO: Waiting for pod pod-subpath-test-configmap-97d6 to disappear May 16 14:37:50.062: INFO: Pod pod-subpath-test-configmap-97d6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-97d6 May 16 14:37:50.062: INFO: Deleting pod "pod-subpath-test-configmap-97d6" in namespace "subpath-5523" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 16 14:37:50.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5523" for this suite. 
May 16 14:37:56.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 16 14:37:56.218: INFO: namespace subpath-5523 deletion completed in 6.149175605s • [SLOW TEST:30.607 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 16 14:37:56.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 16 14:37:56.298: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233504,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 16 14:37:56.298: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233504,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 16 14:38:06.307: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233524,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 16 14:38:06.307: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233524,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 16 14:38:16.316: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233544,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 16 14:38:16.316: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233544,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 16 14:38:26.323: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233564,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 16 14:38:26.323: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-a,UID:711970a9-e524-4290-8d12-376334bdcb50,ResourceVersion:11233564,Generation:0,CreationTimestamp:2020-05-16 14:37:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 16 14:38:36.354: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-b,UID:ff41e7d9-46f0-4822-b883-759d19267863,ResourceVersion:11233586,Generation:0,CreationTimestamp:2020-05-16 14:38:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 16 14:38:36.355: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-b,UID:ff41e7d9-46f0-4822-b883-759d19267863,ResourceVersion:11233586,Generation:0,CreationTimestamp:2020-05-16 14:38:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 16 14:38:46.362: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-b,UID:ff41e7d9-46f0-4822-b883-759d19267863,ResourceVersion:11233606,Generation:0,CreationTimestamp:2020-05-16 14:38:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 16 14:38:46.362: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5891,SelfLink:/api/v1/namespaces/watch-5891/configmaps/e2e-watch-test-configmap-b,UID:ff41e7d9-46f0-4822-b883-759d19267863,ResourceVersion:11233606,Generation:0,CreationTimestamp:2020-05-16 14:38:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 16 14:38:56.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5891" for this suite.
May 16 14:39:02.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 16 14:39:02.463: INFO: namespace watch-5891 deletion completed in 6.096174506s
• [SLOW TEST:66.243 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
May 16 14:39:02.464: INFO: Running AfterSuite actions on all nodes
May 16 14:39:02.464: INFO: Running AfterSuite actions on node 1
May 16 14:39:02.464: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6198.125 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
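Aside: the Watchers spec above registers three watchers (label selector A, label selector B, and "A or B"), which is why every `Got : ADDED/MODIFIED/DELETED` entry appears twice in the log — each event on configmap A is delivered to both the label-A watcher and the A-or-B watcher. A minimal sketch of that selector routing (watcher names and the `observers` helper are illustrative, not the e2e framework's own code):

```python
def observers(labels):
    """Return which of the three watchers in the test would observe an
    event for a configmap carrying the given labels."""
    value = labels.get("watch-this-configmap")
    seen = set()
    if value == "multiple-watchers-A":
        seen |= {"watch-A", "watch-A-or-B"}
    if value == "multiple-watchers-B":
        seen |= {"watch-B", "watch-A-or-B"}
    return seen

# Events on configmap A fan out to two watchers, matching the duplicated
# log lines; configmap B likewise reaches watch-B and watch-A-or-B.
print(sorted(observers({"watch-this-configmap": "multiple-watchers-A"})))
# → ['watch-A', 'watch-A-or-B']
```

The test then asserts that each watcher's channel receives exactly the events its selector matches, in order, for the add, the two modifications, and the delete.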